00:00:00.002 Started by upstream project "autotest-per-patch" build number 127212 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.022 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.023 The recommended git tool is: git 00:00:00.023 using credential 00000000-0000-0000-0000-000000000002 00:00:00.026 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.041 Fetching changes from the remote Git repository 00:00:00.046 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.067 Using shallow fetch with depth 1 00:00:00.067 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.067 > git --version # timeout=10 00:00:00.085 > git --version # 'git version 2.39.2' 00:00:00.085 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.113 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.113 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.786 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.795 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.804 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD) 00:00:02.804 > git config core.sparsecheckout # timeout=10 00:00:02.813 > git read-tree -mu HEAD # timeout=10 00:00:02.828 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5 00:00:02.846 Commit message: "packer: Add bios builder" 00:00:02.847 > git rev-list --no-walk 
4313f32deecbb7108199ebd1913b403a3005dece # timeout=10 00:00:02.930 [Pipeline] Start of Pipeline 00:00:02.979 [Pipeline] library 00:00:02.980 Loading library shm_lib@master 00:00:02.980 Library shm_lib@master is cached. Copying from home. 00:00:02.991 [Pipeline] node 00:00:03.003 Running on WFP19 in /var/jenkins/workspace/crypto-phy-autotest 00:00:03.004 [Pipeline] { 00:00:03.010 [Pipeline] catchError 00:00:03.011 [Pipeline] { 00:00:03.019 [Pipeline] wrap 00:00:03.026 [Pipeline] { 00:00:03.031 [Pipeline] stage 00:00:03.032 [Pipeline] { (Prologue) 00:00:03.176 [Pipeline] sh 00:00:03.456 + logger -p user.info -t JENKINS-CI 00:00:03.474 [Pipeline] echo 00:00:03.475 Node: WFP19 00:00:03.483 [Pipeline] sh 00:00:03.773 [Pipeline] setCustomBuildProperty 00:00:03.783 [Pipeline] echo 00:00:03.784 Cleanup processes 00:00:03.790 [Pipeline] sh 00:00:04.073 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:04.073 465474 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:04.085 [Pipeline] sh 00:00:04.364 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:00:04.364 ++ grep -v 'sudo pgrep' 00:00:04.364 ++ awk '{print $1}' 00:00:04.364 + sudo kill -9 00:00:04.364 + true 00:00:04.380 [Pipeline] cleanWs 00:00:04.447 [WS-CLEANUP] Deleting project workspace... 00:00:04.447 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.454 [WS-CLEANUP] done 00:00:04.459 [Pipeline] setCustomBuildProperty 00:00:04.478 [Pipeline] sh 00:00:04.760 + sudo git config --global --replace-all safe.directory '*' 00:00:04.832 [Pipeline] httpRequest 00:00:04.852 [Pipeline] echo 00:00:04.853 Sorcerer 10.211.164.101 is alive 00:00:04.860 [Pipeline] httpRequest 00:00:04.864 HttpMethod: GET 00:00:04.864 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:04.865 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:04.866 Response Code: HTTP/1.1 200 OK 00:00:04.867 Success: Status code 200 is in the accepted range: 200,404 00:00:04.867 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:05.497 [Pipeline] sh 00:00:05.779 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:05.792 [Pipeline] httpRequest 00:00:05.813 [Pipeline] echo 00:00:05.815 Sorcerer 10.211.164.101 is alive 00:00:05.820 [Pipeline] httpRequest 00:00:05.824 HttpMethod: GET 00:00:05.825 URL: http://10.211.164.101/packages/spdk_79c77cd8688f48e6e80e1571341837da4151dd66.tar.gz 00:00:05.825 Sending request to url: http://10.211.164.101/packages/spdk_79c77cd8688f48e6e80e1571341837da4151dd66.tar.gz 00:00:05.839 Response Code: HTTP/1.1 200 OK 00:00:05.840 Success: Status code 200 is in the accepted range: 200,404 00:00:05.840 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/spdk_79c77cd8688f48e6e80e1571341837da4151dd66.tar.gz 00:01:15.520 [Pipeline] sh 00:01:15.803 + tar --no-same-owner -xf spdk_79c77cd8688f48e6e80e1571341837da4151dd66.tar.gz 00:01:19.102 [Pipeline] sh 00:01:19.382 + git -C spdk log --oneline -n5 00:01:19.382 79c77cd86 nvmf: add support for a passthrough subsystem 00:01:19.382 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
00:01:19.382 fc2398dfa raid: clear base bdev configure_cb after executing 00:01:19.382 5558f3f50 raid: complete bdev_raid_create after sb is written 00:01:19.382 d005e023b raid: fix empty slot not updated in sb after resize 00:01:19.394 [Pipeline] } 00:01:19.412 [Pipeline] // stage 00:01:19.422 [Pipeline] stage 00:01:19.425 [Pipeline] { (Prepare) 00:01:19.444 [Pipeline] writeFile 00:01:19.462 [Pipeline] sh 00:01:19.747 + logger -p user.info -t JENKINS-CI 00:01:19.761 [Pipeline] sh 00:01:20.044 + logger -p user.info -t JENKINS-CI 00:01:20.056 [Pipeline] sh 00:01:20.337 + cat autorun-spdk.conf 00:01:20.337 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.337 SPDK_TEST_BLOCKDEV=1 00:01:20.337 SPDK_TEST_ISAL=1 00:01:20.337 SPDK_TEST_CRYPTO=1 00:01:20.337 SPDK_TEST_REDUCE=1 00:01:20.337 SPDK_TEST_VBDEV_COMPRESS=1 00:01:20.337 SPDK_RUN_UBSAN=1 00:01:20.337 SPDK_TEST_ACCEL=1 00:01:20.344 RUN_NIGHTLY=0 00:01:20.349 [Pipeline] readFile 00:01:20.376 [Pipeline] withEnv 00:01:20.378 [Pipeline] { 00:01:20.392 [Pipeline] sh 00:01:20.674 + set -ex 00:01:20.674 + [[ -f /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf ]] 00:01:20.674 + source /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf 00:01:20.674 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.674 ++ SPDK_TEST_BLOCKDEV=1 00:01:20.674 ++ SPDK_TEST_ISAL=1 00:01:20.674 ++ SPDK_TEST_CRYPTO=1 00:01:20.674 ++ SPDK_TEST_REDUCE=1 00:01:20.674 ++ SPDK_TEST_VBDEV_COMPRESS=1 00:01:20.674 ++ SPDK_RUN_UBSAN=1 00:01:20.674 ++ SPDK_TEST_ACCEL=1 00:01:20.674 ++ RUN_NIGHTLY=0 00:01:20.674 + case $SPDK_TEST_NVMF_NICS in 00:01:20.674 + DRIVERS= 00:01:20.674 + [[ -n '' ]] 00:01:20.674 + exit 0 00:01:20.684 [Pipeline] } 00:01:20.701 [Pipeline] // withEnv 00:01:20.707 [Pipeline] } 00:01:20.724 [Pipeline] // stage 00:01:20.734 [Pipeline] catchError 00:01:20.736 [Pipeline] { 00:01:20.751 [Pipeline] timeout 00:01:20.751 Timeout set to expire in 1 hr 0 min 00:01:20.753 [Pipeline] { 00:01:20.769 [Pipeline] stage 00:01:20.772 [Pipeline] { (Tests) 
00:01:20.788 [Pipeline] sh 00:01:21.071 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/crypto-phy-autotest 00:01:21.071 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest 00:01:21.071 + DIR_ROOT=/var/jenkins/workspace/crypto-phy-autotest 00:01:21.071 + [[ -n /var/jenkins/workspace/crypto-phy-autotest ]] 00:01:21.071 + DIR_SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:01:21.071 + DIR_OUTPUT=/var/jenkins/workspace/crypto-phy-autotest/output 00:01:21.071 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/spdk ]] 00:01:21.071 + [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/output ]] 00:01:21.071 + mkdir -p /var/jenkins/workspace/crypto-phy-autotest/output 00:01:21.071 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/output ]] 00:01:21.071 + [[ crypto-phy-autotest == pkgdep-* ]] 00:01:21.071 + cd /var/jenkins/workspace/crypto-phy-autotest 00:01:21.071 + source /etc/os-release 00:01:21.071 ++ NAME='Fedora Linux' 00:01:21.071 ++ VERSION='38 (Cloud Edition)' 00:01:21.071 ++ ID=fedora 00:01:21.071 ++ VERSION_ID=38 00:01:21.071 ++ VERSION_CODENAME= 00:01:21.071 ++ PLATFORM_ID=platform:f38 00:01:21.071 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:21.071 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:21.071 ++ LOGO=fedora-logo-icon 00:01:21.071 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:21.071 ++ HOME_URL=https://fedoraproject.org/ 00:01:21.071 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:21.071 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:21.071 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:21.071 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:21.071 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:21.071 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:21.071 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:21.071 ++ SUPPORT_END=2024-05-14 00:01:21.071 ++ VARIANT='Cloud Edition' 00:01:21.071 ++ VARIANT_ID=cloud 00:01:21.071 + uname -a 00:01:21.071 Linux spdk-wfp-19 
6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:21.071 + sudo /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:01:25.261 Hugepages 00:01:25.261 node hugesize free / total 00:01:25.261 node0 1048576kB 0 / 0 00:01:25.261 node0 2048kB 0 / 0 00:01:25.261 node1 1048576kB 0 / 0 00:01:25.261 node1 2048kB 0 / 0 00:01:25.261 00:01:25.261 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:25.261 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:25.261 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:25.261 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:25.261 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:25.261 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:25.261 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:25.261 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:25.261 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:25.261 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:25.261 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:25.261 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:25.261 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:25.261 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:25.261 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:25.261 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:25.261 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:25.261 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:25.261 + rm -f /tmp/spdk-ld-path 00:01:25.261 + source autorun-spdk.conf 00:01:25.261 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.261 ++ SPDK_TEST_BLOCKDEV=1 00:01:25.261 ++ SPDK_TEST_ISAL=1 00:01:25.261 ++ SPDK_TEST_CRYPTO=1 00:01:25.261 ++ SPDK_TEST_REDUCE=1 00:01:25.261 ++ SPDK_TEST_VBDEV_COMPRESS=1 00:01:25.261 ++ SPDK_RUN_UBSAN=1 00:01:25.261 ++ SPDK_TEST_ACCEL=1 00:01:25.261 ++ RUN_NIGHTLY=0 00:01:25.261 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:25.261 + [[ -n '' ]] 00:01:25.262 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/crypto-phy-autotest/spdk 00:01:25.262 + for M in /var/spdk/build-*-manifest.txt 00:01:25.262 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:25.262 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/ 00:01:25.262 + for M in /var/spdk/build-*-manifest.txt 00:01:25.262 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:25.262 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/ 00:01:25.262 ++ uname 00:01:25.262 + [[ Linux == \L\i\n\u\x ]] 00:01:25.262 + sudo dmesg -T 00:01:25.262 + sudo dmesg --clear 00:01:25.262 + dmesg_pid=467083 00:01:25.262 + [[ Fedora Linux == FreeBSD ]] 00:01:25.262 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:25.262 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:25.262 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:25.262 + [[ -x /usr/src/fio-static/fio ]] 00:01:25.262 + export FIO_BIN=/usr/src/fio-static/fio 00:01:25.262 + FIO_BIN=/usr/src/fio-static/fio 00:01:25.262 + sudo dmesg -Tw 00:01:25.262 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\c\r\y\p\t\o\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:25.262 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:25.262 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:25.262 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:25.262 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:25.262 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:25.262 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:25.262 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:25.262 + spdk/autorun.sh /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf 00:01:25.262 Test configuration: 00:01:25.262 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.262 SPDK_TEST_BLOCKDEV=1 00:01:25.262 SPDK_TEST_ISAL=1 00:01:25.262 SPDK_TEST_CRYPTO=1 00:01:25.262 SPDK_TEST_REDUCE=1 00:01:25.262 SPDK_TEST_VBDEV_COMPRESS=1 00:01:25.262 SPDK_RUN_UBSAN=1 00:01:25.262 SPDK_TEST_ACCEL=1 00:01:25.262 RUN_NIGHTLY=0 13:01:05 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:01:25.262 13:01:05 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:25.262 13:01:05 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:25.262 13:01:05 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:25.262 13:01:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:25.262 13:01:05 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:25.262 13:01:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:25.262 13:01:05 -- paths/export.sh@5 -- $ export PATH 00:01:25.262 13:01:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:25.262 13:01:05 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:01:25.262 13:01:05 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:25.262 13:01:05 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721991665.XXXXXX 00:01:25.262 13:01:05 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721991665.MmjKll 00:01:25.262 13:01:05 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:25.262 13:01:05 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:25.262 13:01:05 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/' 00:01:25.262 
13:01:05 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:25.262 13:01:05 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:25.262 13:01:05 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:25.262 13:01:05 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:01:25.262 13:01:05 -- common/autotest_common.sh@10 -- $ set +x 00:01:25.262 13:01:05 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk' 00:01:25.262 13:01:05 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:25.262 13:01:05 -- pm/common@17 -- $ local monitor 00:01:25.262 13:01:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:25.262 13:01:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:25.262 13:01:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:25.262 13:01:05 -- pm/common@21 -- $ date +%s 00:01:25.262 13:01:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:25.262 13:01:05 -- pm/common@21 -- $ date +%s 00:01:25.262 13:01:05 -- pm/common@25 -- $ sleep 1 00:01:25.262 13:01:05 -- pm/common@21 -- $ date +%s 00:01:25.262 13:01:05 -- pm/common@21 -- $ date +%s 00:01:25.262 13:01:05 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721991665 00:01:25.262 13:01:05 -- pm/common@21 -- $ 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721991665 00:01:25.262 13:01:05 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721991665 00:01:25.262 13:01:05 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721991665 00:01:25.262 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721991665_collect-vmstat.pm.log 00:01:25.262 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721991665_collect-cpu-load.pm.log 00:01:25.262 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721991665_collect-cpu-temp.pm.log 00:01:25.262 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721991665_collect-bmc-pm.bmc.pm.log 00:01:26.200 13:01:06 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:26.200 13:01:06 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:26.200 13:01:06 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:26.200 13:01:06 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk 00:01:26.200 13:01:06 -- spdk/autobuild.sh@16 -- $ date -u 00:01:26.200 Fri Jul 26 11:01:06 AM UTC 2024 00:01:26.200 13:01:06 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:26.200 v24.09-pre-322-g79c77cd86 00:01:26.200 13:01:06 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:26.200 13:01:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:26.200 13:01:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using 
ubsan' 00:01:26.200 13:01:06 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:26.200 13:01:06 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:26.200 13:01:06 -- common/autotest_common.sh@10 -- $ set +x 00:01:26.200 ************************************ 00:01:26.200 START TEST ubsan 00:01:26.200 ************************************ 00:01:26.200 13:01:06 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:26.200 using ubsan 00:01:26.200 00:01:26.200 real 0m0.001s 00:01:26.200 user 0m0.000s 00:01:26.200 sys 0m0.000s 00:01:26.200 13:01:06 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:26.200 13:01:06 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:26.200 ************************************ 00:01:26.200 END TEST ubsan 00:01:26.200 ************************************ 00:01:26.200 13:01:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:26.200 13:01:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:26.200 13:01:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:26.200 13:01:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:26.200 13:01:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:26.200 13:01:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:26.200 13:01:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:26.200 13:01:06 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:26.200 13:01:06 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:26.459 Using default SPDK env in /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:01:26.459 Using default DPDK in /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:01:26.718 Using 'verbs' RDMA provider 00:01:42.986 Configuring ISA-L 
(logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:57.901 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:57.901 Creating mk/config.mk...done. 00:01:57.901 Creating mk/cc.flags.mk...done. 00:01:57.901 Type 'make' to build. 00:01:57.901 13:01:36 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:57.901 13:01:36 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:57.901 13:01:36 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:57.901 13:01:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:57.901 ************************************ 00:01:57.901 START TEST make 00:01:57.901 ************************************ 00:01:57.901 13:01:36 make -- common/autotest_common.sh@1125 -- $ make -j112 00:01:57.901 make[1]: Nothing to be done for 'all'. 00:02:36.627 The Meson build system 00:02:36.627 Version: 1.3.1 00:02:36.627 Source dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk 00:02:36.627 Build dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp 00:02:36.627 Build type: native build 00:02:36.627 Program cat found: YES (/usr/bin/cat) 00:02:36.627 Project name: DPDK 00:02:36.627 Project version: 24.03.0 00:02:36.627 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:36.627 C linker for the host machine: cc ld.bfd 2.39-16 00:02:36.627 Host machine cpu family: x86_64 00:02:36.627 Host machine cpu: x86_64 00:02:36.627 Message: ## Building in Developer Mode ## 00:02:36.627 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:36.627 Program check-symbols.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:36.627 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:36.627 Program python3 found: YES (/usr/bin/python3) 00:02:36.627 
Program cat found: YES (/usr/bin/cat) 00:02:36.627 Compiler for C supports arguments -march=native: YES 00:02:36.627 Checking for size of "void *" : 8 00:02:36.627 Checking for size of "void *" : 8 (cached) 00:02:36.627 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:36.627 Library m found: YES 00:02:36.627 Library numa found: YES 00:02:36.627 Has header "numaif.h" : YES 00:02:36.627 Library fdt found: NO 00:02:36.627 Library execinfo found: NO 00:02:36.627 Has header "execinfo.h" : YES 00:02:36.627 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:36.627 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:36.627 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:36.627 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:36.627 Run-time dependency openssl found: YES 3.0.9 00:02:36.627 Run-time dependency libpcap found: YES 1.10.4 00:02:36.627 Has header "pcap.h" with dependency libpcap: YES 00:02:36.627 Compiler for C supports arguments -Wcast-qual: YES 00:02:36.627 Compiler for C supports arguments -Wdeprecated: YES 00:02:36.627 Compiler for C supports arguments -Wformat: YES 00:02:36.627 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:36.627 Compiler for C supports arguments -Wformat-security: NO 00:02:36.627 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:36.627 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:36.627 Compiler for C supports arguments -Wnested-externs: YES 00:02:36.627 Compiler for C supports arguments -Wold-style-definition: YES 00:02:36.627 Compiler for C supports arguments -Wpointer-arith: YES 00:02:36.627 Compiler for C supports arguments -Wsign-compare: YES 00:02:36.627 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:36.627 Compiler for C supports arguments -Wundef: YES 00:02:36.627 Compiler for C supports arguments -Wwrite-strings: YES 00:02:36.627 Compiler for C supports arguments 
-Wno-address-of-packed-member: YES 00:02:36.627 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:36.627 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:36.627 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:36.627 Program objdump found: YES (/usr/bin/objdump) 00:02:36.627 Compiler for C supports arguments -mavx512f: YES 00:02:36.627 Checking if "AVX512 checking" compiles: YES 00:02:36.627 Fetching value of define "__SSE4_2__" : 1 00:02:36.627 Fetching value of define "__AES__" : 1 00:02:36.627 Fetching value of define "__AVX__" : 1 00:02:36.627 Fetching value of define "__AVX2__" : 1 00:02:36.627 Fetching value of define "__AVX512BW__" : 1 00:02:36.627 Fetching value of define "__AVX512CD__" : 1 00:02:36.627 Fetching value of define "__AVX512DQ__" : 1 00:02:36.627 Fetching value of define "__AVX512F__" : 1 00:02:36.627 Fetching value of define "__AVX512VL__" : 1 00:02:36.627 Fetching value of define "__PCLMUL__" : 1 00:02:36.627 Fetching value of define "__RDRND__" : 1 00:02:36.628 Fetching value of define "__RDSEED__" : 1 00:02:36.628 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:36.628 Fetching value of define "__znver1__" : (undefined) 00:02:36.628 Fetching value of define "__znver2__" : (undefined) 00:02:36.628 Fetching value of define "__znver3__" : (undefined) 00:02:36.628 Fetching value of define "__znver4__" : (undefined) 00:02:36.628 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:36.628 Message: lib/log: Defining dependency "log" 00:02:36.628 Message: lib/kvargs: Defining dependency "kvargs" 00:02:36.628 Message: lib/telemetry: Defining dependency "telemetry" 00:02:36.628 Checking for function "getentropy" : NO 00:02:36.628 Message: lib/eal: Defining dependency "eal" 00:02:36.628 Message: lib/ring: Defining dependency "ring" 00:02:36.628 Message: lib/rcu: Defining dependency "rcu" 00:02:36.628 Message: lib/mempool: Defining dependency "mempool" 
00:02:36.628 Message: lib/mbuf: Defining dependency "mbuf" 00:02:36.628 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:36.628 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:36.628 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:36.628 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:36.628 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:36.628 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:36.628 Compiler for C supports arguments -mpclmul: YES 00:02:36.628 Compiler for C supports arguments -maes: YES 00:02:36.628 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:36.628 Compiler for C supports arguments -mavx512bw: YES 00:02:36.628 Compiler for C supports arguments -mavx512dq: YES 00:02:36.628 Compiler for C supports arguments -mavx512vl: YES 00:02:36.628 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:36.628 Compiler for C supports arguments -mavx2: YES 00:02:36.628 Compiler for C supports arguments -mavx: YES 00:02:36.628 Message: lib/net: Defining dependency "net" 00:02:36.628 Message: lib/meter: Defining dependency "meter" 00:02:36.628 Message: lib/ethdev: Defining dependency "ethdev" 00:02:36.628 Message: lib/pci: Defining dependency "pci" 00:02:36.628 Message: lib/cmdline: Defining dependency "cmdline" 00:02:36.628 Message: lib/hash: Defining dependency "hash" 00:02:36.628 Message: lib/timer: Defining dependency "timer" 00:02:36.628 Message: lib/compressdev: Defining dependency "compressdev" 00:02:36.628 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:36.628 Message: lib/dmadev: Defining dependency "dmadev" 00:02:36.628 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:36.628 Message: lib/power: Defining dependency "power" 00:02:36.628 Message: lib/reorder: Defining dependency "reorder" 00:02:36.628 Message: lib/security: Defining dependency "security" 00:02:36.628 Has header "linux/userfaultfd.h" : YES 00:02:36.628 Has header 
"linux/vduse.h" : YES 00:02:36.628 Message: lib/vhost: Defining dependency "vhost" 00:02:36.628 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:36.628 Message: drivers/bus/auxiliary: Defining dependency "bus_auxiliary" 00:02:36.628 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:36.628 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:36.628 Compiler for C supports arguments -std=c11: YES 00:02:36.628 Compiler for C supports arguments -Wno-strict-prototypes: YES 00:02:36.628 Compiler for C supports arguments -D_BSD_SOURCE: YES 00:02:36.628 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES 00:02:36.628 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES 00:02:36.628 Run-time dependency libmlx5 found: YES 1.24.44.0 00:02:36.628 Run-time dependency libibverbs found: YES 1.14.44.0 00:02:36.628 Library mtcr_ul found: NO 00:02:36.628 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_ESP" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/verbs.h" has symbol "IBV_RX_HASH_IPSEC_SPI" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/verbs.h" has symbol "IBV_ACCESS_RELAXED_ORDERING " with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol 
"mlx5dv_create_flow_action_packet_reformat" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_MPLS" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAGS_PCI_WRITE_END_PADDING" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAG_RX_END_PADDING" with dependencies libmlx5, libibverbs: NO 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_devx_port" with dependencies libmlx5, libibverbs: NO 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_port" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_ib_port" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_create" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_COUNTERS_DEVX" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_DEFAULT_MISS" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_query_async" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_qp_query" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_pp_alloc" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_devx_tir" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_get_event" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_meter" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header 
"infiniband/mlx5dv.h" has symbol "MLX5_MMAP_GET_NC_PAGES_CMD" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_NIC_RX" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_FDB" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_push_vlan" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_alloc_var" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ENHANCED_MPSW" with dependencies libmlx5, libibverbs: NO 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_SEND_EN" with dependencies libmlx5, libibverbs: NO 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_WAIT" with dependencies libmlx5, libibverbs: NO 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ACCESS_ASO" with dependencies libmlx5, libibverbs: NO 00:02:36.628 Header "linux/if_link.h" has symbol "IFLA_NUM_VF" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "linux/if_link.h" has symbol "IFLA_EXT_MASK" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "linux/if_link.h" has symbol "IFLA_PHYS_SWITCH_ID" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "linux/if_link.h" has symbol "IFLA_PHYS_PORT_NAME" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "rdma/rdma_netlink.h" has symbol "RDMA_NL_NLDEV" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_GET" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_PORT_GET" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_INDEX" with dependencies libmlx5, libibverbs: YES 
00:02:36.628 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_NAME" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_INDEX" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_STATE" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_NDEV_INDEX" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_domain" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_sampler" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_set_reclaim_device_memory" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_array" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "linux/devlink.h" has symbol "DEVLINK_GENL_NAME" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_aso" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/verbs.h" has symbol "INFINIBAND_VERBS_H" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "MLX5_WQE_UMR_CTRL_FLAG_INLINE" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_rule" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_allow_duplicate_rules" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/verbs.h" has symbol "ibv_reg_mr_iova" with dependencies 
libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/verbs.h" has symbol "ibv_import_device" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_root_table" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_steering_anchor" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Header "infiniband/verbs.h" has symbol "ibv_is_fork_initialized" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Checking whether type "struct mlx5dv_sw_parsing_caps" has member "sw_parsing_offloads" with dependencies libmlx5, libibverbs: YES 00:02:36.628 Checking whether type "struct ibv_counter_set_init_attr" has member "counter_set_id" with dependencies libmlx5, libibverbs: NO 00:02:36.628 Checking whether type "struct ibv_counters_init_attr" has member "comp_mask" with dependencies libmlx5, libibverbs: YES 00:02:36.629 Checking whether type "struct mlx5dv_devx_uar" has member "mmap_off" with dependencies libmlx5, libibverbs: YES 00:02:36.629 Checking whether type "struct mlx5dv_flow_matcher_attr" has member "ft_type" with dependencies libmlx5, libibverbs: YES 00:02:36.629 Configuring mlx5_autoconf.h using configuration 00:02:36.629 Message: drivers/common/mlx5: Defining dependency "common_mlx5" 00:02:36.629 Run-time dependency libcrypto found: YES 3.0.9 00:02:36.629 Library IPSec_MB found: YES 00:02:36.629 Fetching value of define "IMB_VERSION_STR" : "1.5.0" 00:02:36.629 Message: drivers/common/qat: Defining dependency "common_qat" 00:02:36.629 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:36.629 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:36.629 Library IPSec_MB found: YES 00:02:36.629 Fetching value of define "IMB_VERSION_STR" : "1.5.0" (cached) 00:02:36.629 Message: drivers/crypto/ipsec_mb: Defining dependency "crypto_ipsec_mb" 00:02:36.629 Compiler for C supports arguments 
-std=c11: YES (cached) 00:02:36.629 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:02:36.629 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:02:36.629 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:02:36.629 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:02:36.629 Message: drivers/crypto/mlx5: Defining dependency "crypto_mlx5" 00:02:36.629 Run-time dependency libisal found: NO (tried pkgconfig) 00:02:36.629 Library libisal found: NO 00:02:36.629 Message: drivers/compress/isal: Defining dependency "compress_isal" 00:02:36.629 Compiler for C supports arguments -std=c11: YES (cached) 00:02:36.629 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:02:36.629 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:02:36.629 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:02:36.629 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:02:36.629 Message: drivers/compress/mlx5: Defining dependency "compress_mlx5" 00:02:36.629 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:36.629 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:36.629 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:36.629 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:36.629 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:36.629 Program doxygen found: YES (/usr/bin/doxygen) 00:02:36.629 Configuring doxy-api-html.conf using configuration 00:02:36.629 Configuring doxy-api-man.conf using configuration 00:02:36.629 Program mandb found: YES (/usr/bin/mandb) 00:02:36.629 Program sphinx-build found: NO 00:02:36.629 Configuring rte_build_config.h using configuration 00:02:36.629 Message: 00:02:36.629 ================= 00:02:36.629 Applications Enabled 00:02:36.629 ================= 00:02:36.629 
00:02:36.629 apps: 00:02:36.629 00:02:36.629 00:02:36.629 Message: 00:02:36.629 ================= 00:02:36.629 Libraries Enabled 00:02:36.629 ================= 00:02:36.629 00:02:36.629 libs: 00:02:36.629 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:36.629 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:36.629 cryptodev, dmadev, power, reorder, security, vhost, 00:02:36.629 00:02:36.629 Message: 00:02:36.629 =============== 00:02:36.629 Drivers Enabled 00:02:36.629 =============== 00:02:36.629 00:02:36.629 common: 00:02:36.629 mlx5, qat, 00:02:36.629 bus: 00:02:36.629 auxiliary, pci, vdev, 00:02:36.629 mempool: 00:02:36.629 ring, 00:02:36.629 dma: 00:02:36.629 00:02:36.629 net: 00:02:36.629 00:02:36.629 crypto: 00:02:36.629 ipsec_mb, mlx5, 00:02:36.629 compress: 00:02:36.629 isal, mlx5, 00:02:36.629 vdpa: 00:02:36.629 00:02:36.629 00:02:36.629 Message: 00:02:36.629 ================= 00:02:36.629 Content Skipped 00:02:36.629 ================= 00:02:36.629 00:02:36.629 apps: 00:02:36.629 dumpcap: explicitly disabled via build config 00:02:36.629 graph: explicitly disabled via build config 00:02:36.629 pdump: explicitly disabled via build config 00:02:36.629 proc-info: explicitly disabled via build config 00:02:36.629 test-acl: explicitly disabled via build config 00:02:36.629 test-bbdev: explicitly disabled via build config 00:02:36.629 test-cmdline: explicitly disabled via build config 00:02:36.629 test-compress-perf: explicitly disabled via build config 00:02:36.629 test-crypto-perf: explicitly disabled via build config 00:02:36.629 test-dma-perf: explicitly disabled via build config 00:02:36.629 test-eventdev: explicitly disabled via build config 00:02:36.629 test-fib: explicitly disabled via build config 00:02:36.629 test-flow-perf: explicitly disabled via build config 00:02:36.629 test-gpudev: explicitly disabled via build config 00:02:36.629 test-mldev: explicitly disabled via build config 00:02:36.629 test-pipeline: explicitly 
disabled via build config 00:02:36.629 test-pmd: explicitly disabled via build config 00:02:36.629 test-regex: explicitly disabled via build config 00:02:36.629 test-sad: explicitly disabled via build config 00:02:36.629 test-security-perf: explicitly disabled via build config 00:02:36.629 00:02:36.629 libs: 00:02:36.629 argparse: explicitly disabled via build config 00:02:36.629 metrics: explicitly disabled via build config 00:02:36.629 acl: explicitly disabled via build config 00:02:36.629 bbdev: explicitly disabled via build config 00:02:36.629 bitratestats: explicitly disabled via build config 00:02:36.629 bpf: explicitly disabled via build config 00:02:36.629 cfgfile: explicitly disabled via build config 00:02:36.629 distributor: explicitly disabled via build config 00:02:36.629 efd: explicitly disabled via build config 00:02:36.629 eventdev: explicitly disabled via build config 00:02:36.629 dispatcher: explicitly disabled via build config 00:02:36.629 gpudev: explicitly disabled via build config 00:02:36.629 gro: explicitly disabled via build config 00:02:36.629 gso: explicitly disabled via build config 00:02:36.629 ip_frag: explicitly disabled via build config 00:02:36.629 jobstats: explicitly disabled via build config 00:02:36.629 latencystats: explicitly disabled via build config 00:02:36.629 lpm: explicitly disabled via build config 00:02:36.629 member: explicitly disabled via build config 00:02:36.629 pcapng: explicitly disabled via build config 00:02:36.629 rawdev: explicitly disabled via build config 00:02:36.629 regexdev: explicitly disabled via build config 00:02:36.629 mldev: explicitly disabled via build config 00:02:36.629 rib: explicitly disabled via build config 00:02:36.629 sched: explicitly disabled via build config 00:02:36.629 stack: explicitly disabled via build config 00:02:36.629 ipsec: explicitly disabled via build config 00:02:36.629 pdcp: explicitly disabled via build config 00:02:36.629 fib: explicitly disabled via build config 
00:02:36.629 port: explicitly disabled via build config 00:02:36.629 pdump: explicitly disabled via build config 00:02:36.629 table: explicitly disabled via build config 00:02:36.629 pipeline: explicitly disabled via build config 00:02:36.629 graph: explicitly disabled via build config 00:02:36.629 node: explicitly disabled via build config 00:02:36.629 00:02:36.629 drivers: 00:02:36.629 common/cpt: not in enabled drivers build config 00:02:36.629 common/dpaax: not in enabled drivers build config 00:02:36.629 common/iavf: not in enabled drivers build config 00:02:36.629 common/idpf: not in enabled drivers build config 00:02:36.629 common/ionic: not in enabled drivers build config 00:02:36.629 common/mvep: not in enabled drivers build config 00:02:36.629 common/octeontx: not in enabled drivers build config 00:02:36.629 bus/cdx: not in enabled drivers build config 00:02:36.629 bus/dpaa: not in enabled drivers build config 00:02:36.629 bus/fslmc: not in enabled drivers build config 00:02:36.629 bus/ifpga: not in enabled drivers build config 00:02:36.629 bus/platform: not in enabled drivers build config 00:02:36.629 bus/uacce: not in enabled drivers build config 00:02:36.629 bus/vmbus: not in enabled drivers build config 00:02:36.629 common/cnxk: not in enabled drivers build config 00:02:36.629 common/nfp: not in enabled drivers build config 00:02:36.629 common/nitrox: not in enabled drivers build config 00:02:36.629 common/sfc_efx: not in enabled drivers build config 00:02:36.629 mempool/bucket: not in enabled drivers build config 00:02:36.629 mempool/cnxk: not in enabled drivers build config 00:02:36.629 mempool/dpaa: not in enabled drivers build config 00:02:36.629 mempool/dpaa2: not in enabled drivers build config 00:02:36.629 mempool/octeontx: not in enabled drivers build config 00:02:36.629 mempool/stack: not in enabled drivers build config 00:02:36.629 dma/cnxk: not in enabled drivers build config 00:02:36.629 dma/dpaa: not in enabled drivers build config 
00:02:36.629 dma/dpaa2: not in enabled drivers build config 00:02:36.629 dma/hisilicon: not in enabled drivers build config 00:02:36.629 dma/idxd: not in enabled drivers build config 00:02:36.629 dma/ioat: not in enabled drivers build config 00:02:36.629 dma/skeleton: not in enabled drivers build config 00:02:36.629 net/af_packet: not in enabled drivers build config 00:02:36.629 net/af_xdp: not in enabled drivers build config 00:02:36.629 net/ark: not in enabled drivers build config 00:02:36.629 net/atlantic: not in enabled drivers build config 00:02:36.629 net/avp: not in enabled drivers build config 00:02:36.629 net/axgbe: not in enabled drivers build config 00:02:36.629 net/bnx2x: not in enabled drivers build config 00:02:36.629 net/bnxt: not in enabled drivers build config 00:02:36.629 net/bonding: not in enabled drivers build config 00:02:36.629 net/cnxk: not in enabled drivers build config 00:02:36.630 net/cpfl: not in enabled drivers build config 00:02:36.630 net/cxgbe: not in enabled drivers build config 00:02:36.630 net/dpaa: not in enabled drivers build config 00:02:36.630 net/dpaa2: not in enabled drivers build config 00:02:36.630 net/e1000: not in enabled drivers build config 00:02:36.630 net/ena: not in enabled drivers build config 00:02:36.630 net/enetc: not in enabled drivers build config 00:02:36.630 net/enetfec: not in enabled drivers build config 00:02:36.630 net/enic: not in enabled drivers build config 00:02:36.630 net/failsafe: not in enabled drivers build config 00:02:36.630 net/fm10k: not in enabled drivers build config 00:02:36.630 net/gve: not in enabled drivers build config 00:02:36.630 net/hinic: not in enabled drivers build config 00:02:36.630 net/hns3: not in enabled drivers build config 00:02:36.630 net/i40e: not in enabled drivers build config 00:02:36.630 net/iavf: not in enabled drivers build config 00:02:36.630 net/ice: not in enabled drivers build config 00:02:36.630 net/idpf: not in enabled drivers build config 00:02:36.630 
net/igc: not in enabled drivers build config 00:02:36.630 net/ionic: not in enabled drivers build config 00:02:36.630 net/ipn3ke: not in enabled drivers build config 00:02:36.630 net/ixgbe: not in enabled drivers build config 00:02:36.630 net/mana: not in enabled drivers build config 00:02:36.630 net/memif: not in enabled drivers build config 00:02:36.630 net/mlx4: not in enabled drivers build config 00:02:36.630 net/mlx5: not in enabled drivers build config 00:02:36.630 net/mvneta: not in enabled drivers build config 00:02:36.630 net/mvpp2: not in enabled drivers build config 00:02:36.630 net/netvsc: not in enabled drivers build config 00:02:36.630 net/nfb: not in enabled drivers build config 00:02:36.630 net/nfp: not in enabled drivers build config 00:02:36.630 net/ngbe: not in enabled drivers build config 00:02:36.630 net/null: not in enabled drivers build config 00:02:36.630 net/octeontx: not in enabled drivers build config 00:02:36.630 net/octeon_ep: not in enabled drivers build config 00:02:36.630 net/pcap: not in enabled drivers build config 00:02:36.630 net/pfe: not in enabled drivers build config 00:02:36.630 net/qede: not in enabled drivers build config 00:02:36.630 net/ring: not in enabled drivers build config 00:02:36.630 net/sfc: not in enabled drivers build config 00:02:36.630 net/softnic: not in enabled drivers build config 00:02:36.630 net/tap: not in enabled drivers build config 00:02:36.630 net/thunderx: not in enabled drivers build config 00:02:36.630 net/txgbe: not in enabled drivers build config 00:02:36.630 net/vdev_netvsc: not in enabled drivers build config 00:02:36.630 net/vhost: not in enabled drivers build config 00:02:36.630 net/virtio: not in enabled drivers build config 00:02:36.630 net/vmxnet3: not in enabled drivers build config 00:02:36.630 raw/*: missing internal dependency, "rawdev" 00:02:36.630 crypto/armv8: not in enabled drivers build config 00:02:36.630 crypto/bcmfs: not in enabled drivers build config 00:02:36.630 
crypto/caam_jr: not in enabled drivers build config 00:02:36.630 crypto/ccp: not in enabled drivers build config 00:02:36.630 crypto/cnxk: not in enabled drivers build config 00:02:36.630 crypto/dpaa_sec: not in enabled drivers build config 00:02:36.630 crypto/dpaa2_sec: not in enabled drivers build config 00:02:36.630 crypto/mvsam: not in enabled drivers build config 00:02:36.630 crypto/nitrox: not in enabled drivers build config 00:02:36.630 crypto/null: not in enabled drivers build config 00:02:36.630 crypto/octeontx: not in enabled drivers build config 00:02:36.630 crypto/openssl: not in enabled drivers build config 00:02:36.630 crypto/scheduler: not in enabled drivers build config 00:02:36.630 crypto/uadk: not in enabled drivers build config 00:02:36.630 crypto/virtio: not in enabled drivers build config 00:02:36.630 compress/nitrox: not in enabled drivers build config 00:02:36.630 compress/octeontx: not in enabled drivers build config 00:02:36.630 compress/zlib: not in enabled drivers build config 00:02:36.630 regex/*: missing internal dependency, "regexdev" 00:02:36.630 ml/*: missing internal dependency, "mldev" 00:02:36.630 vdpa/ifc: not in enabled drivers build config 00:02:36.630 vdpa/mlx5: not in enabled drivers build config 00:02:36.630 vdpa/nfp: not in enabled drivers build config 00:02:36.630 vdpa/sfc: not in enabled drivers build config 00:02:36.630 event/*: missing internal dependency, "eventdev" 00:02:36.630 baseband/*: missing internal dependency, "bbdev" 00:02:36.630 gpu/*: missing internal dependency, "gpudev" 00:02:36.630 00:02:36.630 00:02:36.630 Build targets in project: 115 00:02:36.630 00:02:36.630 DPDK 24.03.0 00:02:36.630 00:02:36.630 User defined options 00:02:36.630 buildtype : debug 00:02:36.630 default_library : shared 00:02:36.630 libdir : lib 00:02:36.630 prefix : /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:02:36.630 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 
-I/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -DNO_COMPAT_IMB_API_053 -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isalbuild -fPIC -Werror 00:02:36.630 c_link_args : -L/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -L/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l/.libs -lisal 00:02:36.630 cpu_instruction_set: native 00:02:36.630 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:36.630 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:36.630 enable_docs : false 00:02:36.630 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,crypto/qat,compress/qat,common/qat,common/mlx5,bus/auxiliary,crypto,crypto/aesni_mb,crypto/mlx5,crypto/ipsec_mb,compress,compress/isal,compress/mlx5 00:02:36.630 enable_kmods : false 00:02:36.630 max_lcores : 128 00:02:36.630 tests : false 00:02:36.630 00:02:36.630 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:36.630 ninja: Entering directory `/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp' 00:02:36.630 [1/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:36.630 [2/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:36.630 [3/378] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:36.630 [4/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:36.630 [5/378] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:36.630 [6/378] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:36.630 [7/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:36.630 [8/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:36.630 [9/378] Linking static target lib/librte_kvargs.a 00:02:36.630 [10/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:36.630 [11/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:36.630 [12/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:36.630 [13/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:36.630 [14/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:36.630 [15/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:36.630 [16/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:36.630 [17/378] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:36.630 [18/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:36.630 [19/378] Linking static target lib/librte_log.a 00:02:36.630 [20/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:36.630 [21/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:36.630 [22/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:36.630 [23/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:36.630 [24/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:36.630 [25/378] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:36.630 [26/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:36.630 [27/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:36.630 [28/378] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:36.630 [29/378] Linking static target 
lib/librte_pci.a 00:02:36.630 [30/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:36.630 [31/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:36.630 [32/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:36.630 [33/378] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:36.630 [34/378] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:36.630 [35/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:36.630 [36/378] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:36.630 [37/378] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.630 [38/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:36.630 [39/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:36.630 [40/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:36.630 [41/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:36.630 [42/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:36.630 [43/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:36.630 [44/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:36.630 [45/378] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:36.630 [46/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:36.630 [47/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:36.630 [48/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:36.630 [49/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:36.630 [50/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:36.630 [51/378] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:36.631 [52/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:36.631 [53/378] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.631 [54/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:36.631 [55/378] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:36.631 [56/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:36.631 [57/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:36.631 [58/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:36.631 [59/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:36.631 [60/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:36.631 [61/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:36.631 [62/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:36.893 [63/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:36.893 [64/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:36.893 [65/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:36.893 [66/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:36.893 [67/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:36.893 [68/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:36.893 [69/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:36.893 [70/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:36.893 [71/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:36.893 [72/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:36.893 [73/378] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:36.893 [74/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:36.893 [75/378] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:36.893 [76/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:36.893 [77/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:36.893 [78/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:36.893 [79/378] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:36.893 [80/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:36.893 [81/378] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:36.893 [82/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:36.893 [83/378] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:36.893 [84/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:36.893 [85/378] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:36.893 [86/378] Linking static target lib/librte_telemetry.a
00:02:36.893 [87/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:36.893 [88/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:36.893 [89/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:36.893 [90/378] Linking static target lib/librte_ring.a
00:02:36.893 [91/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:36.893 [92/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:36.893 [93/378] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:36.893 [94/378] Linking static target lib/librte_meter.a
00:02:36.893 [95/378] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:36.893 [96/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:36.893 [97/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:36.893 [98/378] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:36.893 [99/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:36.893 [100/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:36.893 [101/378] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:36.893 [102/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:36.893 [103/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:36.893 [104/378] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:36.893 [105/378] Linking static target lib/librte_timer.a
00:02:36.893 [106/378] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:36.893 [107/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_params.c.o
00:02:36.893 [108/378] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:36.893 [109/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:36.893 [110/378] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:36.893 [111/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:36.893 [112/378] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:36.893 [113/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:36.893 [114/378] Linking static target lib/librte_cmdline.a
00:02:36.893 [115/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:36.893 [116/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:36.893 [117/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:36.893 [118/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:36.893 [119/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_logs.c.o
00:02:36.893 [120/378] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:36.893 [121/378] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:37.152 [122/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:37.152 [123/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:37.152 [124/378] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:37.152 [125/378] Linking static target lib/librte_net.a
00:02:37.152 [126/378] Linking static target lib/librte_rcu.a
00:02:37.152 [127/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:37.152 [128/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:37.152 [129/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:37.152 [130/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:37.152 [131/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:37.152 [132/378] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:37.152 [133/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:37.152 [134/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:37.152 [135/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:37.152 [136/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:37.152 [137/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:37.152 [138/378] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:37.152 [139/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:37.152 [140/378] Linking static target lib/librte_mempool.a
00:02:37.152 [141/378] Linking static target lib/librte_compressdev.a
00:02:37.152 [142/378] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:37.152 [143/378] Linking static target lib/librte_dmadev.a
00:02:37.152 [144/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:37.152 [145/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:37.152 [146/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:37.152 [147/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:37.152 [148/378] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:37.152 [149/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:37.152 [150/378] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:37.414 [151/378] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:37.414 [152/378] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:37.414 [153/378] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:37.414 [154/378] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:37.414 [155/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:37.414 [156/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_glue.c.o
00:02:37.414 [157/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:37.414 [158/378] Linking target lib/librte_log.so.24.1
00:02:37.414 [159/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:37.414 [160/378] Linking static target lib/librte_mbuf.a
00:02:37.414 [161/378] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:37.414 [162/378] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:37.414 [163/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_common.c.o
00:02:37.414 [164/378] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:37.414 [165/378] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:37.414 [166/378] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:37.414 [167/378] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:37.414 [168/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_linux_auxiliary.c.o
00:02:37.672 [169/378] Linking static target drivers/libtmp_rte_bus_auxiliary.a
00:02:37.672 [170/378] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:37.672 [171/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen5.c.o
00:02:37.672 [172/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:37.672 [173/378] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:37.672 [174/378] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:37.672 [175/378] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:37.672 [176/378] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:37.672 [177/378] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:37.672 [178/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:37.672 [179/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:37.672 [180/378] Linking static target lib/librte_hash.a
00:02:37.672 [181/378] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:37.672 [182/378] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:37.672 [183/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:37.672 [184/378] Linking static target lib/librte_power.a
00:02:37.672 [185/378] Linking target lib/librte_kvargs.so.24.1
00:02:37.672 [186/378] Linking target lib/librte_telemetry.so.24.1
00:02:37.672 [187/378] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:37.672 [188/378] Linking static target lib/librte_reorder.a
00:02:37.672 [189/378] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:37.672 [190/378] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:37.672 [191/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:37.672 [192/378] Linking static target lib/librte_security.a
00:02:37.672 [193/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:37.672 [194/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_pf2vf.c.o
00:02:37.672 [195/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen2.c.o
00:02:37.672 [196/378] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:37.672 [197/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen5.c.o
00:02:37.672 [198/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_verbs.c.o
00:02:37.672 [199/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen4.c.o
00:02:37.672 [200/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:37.672 [201/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_asym_pmd_gen1.c.o
00:02:37.672 [202/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_common.c.o
00:02:37.672 [203/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen1.c.o
00:02:37.672 [204/378] Generating drivers/rte_bus_auxiliary.pmd.c with a custom command
00:02:37.672 [205/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_auxiliary.c.o
00:02:37.672 [206/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen2.c.o
00:02:37.672 [207/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen3.c.o
00:02:37.672 [208/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen5.c.o
00:02:37.672 [209/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen3.c.o
00:02:37.672 [210/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_ops.c.o
00:02:37.672 [211/378] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:37.672 [212/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen2.c.o
00:02:37.932 [213/378] Compiling C object drivers/librte_bus_auxiliary.so.24.1.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o
00:02:37.932 [214/378] Compiling C object drivers/librte_bus_auxiliary.a.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o
00:02:37.932 [215/378] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:37.932 [216/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen1.c.o
00:02:37.932 [217/378] Linking static target drivers/librte_bus_auxiliary.a
00:02:37.932 [218/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen_lce.c.o
00:02:37.932 [219/378] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd_ops.c.o
00:02:37.932 [220/378] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:37.932 [221/378] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:37.932 [222/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen4.c.o
00:02:37.933 [223/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_pci.c.o
00:02:37.933 [224/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp_pmd.c.o
00:02:37.933 [225/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:37.933 [226/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:37.933 [227/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_device.c.o
00:02:37.933 [228/378] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:37.933 [229/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen_lce.c.o
00:02:37.933 [230/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mp.c.o
00:02:37.933 [231/378] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:37.933 [232/378] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:37.933 [233/378] Linking static target lib/librte_cryptodev.a
00:02:37.933 [234/378] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:37.933 [235/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_qp.c.o
00:02:37.933 [236/378] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:37.933 [237/378] Linking static target drivers/librte_bus_vdev.a
00:02:37.933 [238/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_malloc.c.o
00:02:37.933 [239/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_crypto.c.o
00:02:37.933 [240/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto.c.o
00:02:37.933 [241/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_private.c.o
00:02:37.933 [242/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_devx.c.o
00:02:37.933 [243/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_utils.c.o
00:02:37.933 [244/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_zuc.c.o
00:02:37.933 [245/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym.c.o
00:02:37.933 [246/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:37.933 [247/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_dek.c.o
00:02:37.933 [248/378] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:37.933 [249/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common.c.o
00:02:37.933 [250/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_os.c.o
00:02:37.933 [251/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_chacha_poly.c.o
00:02:37.933 [252/378] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:37.933 [253/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mr.c.o
00:02:37.933 [254/378] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:37.933 [255/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_nl.c.o
00:02:37.933 [256/378] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:37.933 [257/378] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:37.933 [258/378] Linking static target drivers/librte_bus_pci.a
00:02:37.933 [259/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_xts.c.o
00:02:37.933 [260/378] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd.c.o
00:02:37.933 [261/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:37.933 [262/378] Linking static target drivers/libtmp_rte_compress_isal.a
00:02:37.933 [263/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_gcm.c.o
00:02:38.192 [264/378] Linking static target drivers/libtmp_rte_crypto_mlx5.a
00:02:38.192 [265/378] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.192 [266/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp.c.o
00:02:38.192 [267/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym_session.c.o
00:02:38.192 [268/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:38.192 [269/378] Compiling C object drivers/libtmp_rte_compress_mlx5.a.p/compress_mlx5_mlx5_compress.c.o
00:02:38.192 [270/378] Linking static target drivers/libtmp_rte_compress_mlx5.a
00:02:38.192 [271/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_gcm.c.o
00:02:38.192 [272/378] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.192 [273/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_snow3g.c.o
00:02:38.192 [274/378] Linking static target lib/librte_eal.a
00:02:38.192 [275/378] Generating drivers/rte_bus_auxiliary.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.192 [276/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_kasumi.c.o
00:02:38.192 [277/378] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:38.192 [278/378] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.192 [279/378] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:38.192 [280/378] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:38.192 [281/378] Linking static target drivers/librte_mempool_ring.a
00:02:38.192 [282/378] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.192 [283/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen4.c.o
00:02:38.192 [284/378] Generating drivers/rte_compress_isal.pmd.c with a custom command
00:02:38.192 [285/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_sym_pmd_gen1.c.o
00:02:38.192 [286/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_mb.c.o
00:02:38.192 [287/378] Generating drivers/rte_crypto_mlx5.pmd.c with a custom command
00:02:38.192 [288/378] Linking static target drivers/libtmp_rte_crypto_ipsec_mb.a
00:02:38.192 [289/378] Compiling C object drivers/librte_compress_isal.so.24.1.p/meson-generated_.._rte_compress_isal.pmd.c.o
00:02:38.192 [290/378] Compiling C object drivers/librte_compress_isal.a.p/meson-generated_.._rte_compress_isal.pmd.c.o
00:02:38.192 [291/378] Linking static target drivers/librte_compress_isal.a
00:02:38.192 [292/378] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.192 [293/378] Generating drivers/rte_compress_mlx5.pmd.c with a custom command
00:02:38.452 [294/378] Compiling C object drivers/librte_crypto_mlx5.so.24.1.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o
00:02:38.452 [295/378] Compiling C object drivers/librte_crypto_mlx5.a.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o
00:02:38.452 [296/378] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.452 [297/378] Linking static target drivers/librte_crypto_mlx5.a
00:02:38.452 [298/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen3.c.o
00:02:38.452 [299/378] Compiling C object drivers/librte_compress_mlx5.so.24.1.p/meson-generated_.._rte_compress_mlx5.pmd.c.o
00:02:38.452 [300/378] Compiling C object drivers/librte_compress_mlx5.a.p/meson-generated_.._rte_compress_mlx5.pmd.c.o
00:02:38.452 [301/378] Linking static target drivers/librte_compress_mlx5.a
00:02:38.452 [302/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:38.452 [303/378] Linking static target lib/librte_ethdev.a
00:02:38.452 [304/378] Generating drivers/rte_crypto_ipsec_mb.pmd.c with a custom command
00:02:38.452 [305/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_devx_cmds.c.o
00:02:38.711 [306/378] Linking static target drivers/libtmp_rte_common_mlx5.a
00:02:38.711 [307/378] Compiling C object drivers/librte_crypto_ipsec_mb.a.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o
00:02:38.711 [308/378] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.711 [309/378] Compiling C object drivers/librte_crypto_ipsec_mb.so.24.1.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o
00:02:38.711 [310/378] Linking static target drivers/librte_crypto_ipsec_mb.a
00:02:38.711 [311/378] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.711 [312/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:38.711 [313/378] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.711 [314/378] Generating drivers/rte_common_mlx5.pmd.c with a custom command
00:02:38.970 [315/378] Compiling C object drivers/librte_common_mlx5.a.p/meson-generated_.._rte_common_mlx5.pmd.c.o
00:02:38.970 [316/378] Compiling C object drivers/librte_common_mlx5.so.24.1.p/meson-generated_.._rte_common_mlx5.pmd.c.o
00:02:38.970 [317/378] Linking static target drivers/librte_common_mlx5.a
00:02:38.970 [318/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_asym.c.o
00:02:38.970 [319/378] Linking static target drivers/libtmp_rte_common_qat.a
00:02:39.230 [320/378] Generating drivers/rte_common_qat.pmd.c with a custom command
00:02:39.489 [321/378] Compiling C object drivers/librte_common_qat.a.p/meson-generated_.._rte_common_qat.pmd.c.o
00:02:39.489 [322/378] Compiling C object drivers/librte_common_qat.so.24.1.p/meson-generated_.._rte_common_qat.pmd.c.o
00:02:39.489 [323/378] Linking static target drivers/librte_common_qat.a
00:02:39.747 [324/378] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:39.747 [325/378] Linking static target lib/librte_vhost.a
00:02:40.005 [326/378] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:42.542 [327/378] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.113 [328/378] Generating drivers/rte_common_mlx5.sym_chk with a custom command (wrapped by meson to capture output)
00:02:48.406 [329/378] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.699 [330/378] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.699 [331/378] Linking target lib/librte_eal.so.24.1
00:02:51.699 [332/378] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:02:51.699 [333/378] Linking target lib/librte_timer.so.24.1
00:02:51.699 [334/378] Linking target lib/librte_ring.so.24.1
00:02:51.699 [335/378] Linking target lib/librte_meter.so.24.1
00:02:51.699 [336/378] Linking target lib/librte_pci.so.24.1
00:02:51.699 [337/378] Linking target lib/librte_dmadev.so.24.1
00:02:51.699 [338/378] Linking target drivers/librte_bus_auxiliary.so.24.1
00:02:51.699 [339/378] Linking target drivers/librte_bus_vdev.so.24.1
00:02:51.699 [340/378] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:02:51.699 [341/378] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:02:51.699 [342/378] Generating symbol file drivers/librte_bus_vdev.so.24.1.p/librte_bus_vdev.so.24.1.symbols
00:02:51.699 [343/378] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:02:51.699 [344/378] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:02:51.699 [345/378] Generating symbol file drivers/librte_bus_auxiliary.so.24.1.p/librte_bus_auxiliary.so.24.1.symbols
00:02:51.699 [346/378] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:02:51.699 [347/378] Linking target lib/librte_rcu.so.24.1
00:02:51.699 [348/378] Linking target lib/librte_mempool.so.24.1
00:02:51.699 [349/378] Linking target drivers/librte_bus_pci.so.24.1
00:02:51.699 [350/378] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:02:51.699 [351/378] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:02:51.699 [352/378] Generating symbol file drivers/librte_bus_pci.so.24.1.p/librte_bus_pci.so.24.1.symbols
00:02:51.699 [353/378] Linking target lib/librte_mbuf.so.24.1
00:02:51.699 [354/378] Linking target drivers/librte_mempool_ring.so.24.1
00:02:51.958 [355/378] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:02:51.958 [356/378] Linking target lib/librte_cryptodev.so.24.1
00:02:51.958 [357/378] Linking target lib/librte_reorder.so.24.1
00:02:51.958 [358/378] Linking target lib/librte_net.so.24.1
00:02:51.958 [359/378] Linking target lib/librte_compressdev.so.24.1
00:02:52.218 [360/378] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:02:52.218 [361/378] Generating symbol file lib/librte_compressdev.so.24.1.p/librte_compressdev.so.24.1.symbols
00:02:52.218 [362/378] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:02:52.218 [363/378] Linking target lib/librte_security.so.24.1
00:02:52.218 [364/378] Linking target lib/librte_hash.so.24.1
00:02:52.218 [365/378] Linking target lib/librte_cmdline.so.24.1
00:02:52.218 [366/378] Linking target drivers/librte_compress_isal.so.24.1
00:02:52.218 [367/378] Linking target lib/librte_ethdev.so.24.1
00:02:52.218 [368/378] Generating symbol file lib/librte_security.so.24.1.p/librte_security.so.24.1.symbols
00:02:52.218 [369/378] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:02:52.218 [370/378] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:02:52.477 [371/378] Linking target drivers/librte_common_mlx5.so.24.1
00:02:52.477 [372/378] Linking target lib/librte_power.so.24.1
00:02:52.477 [373/378] Linking target lib/librte_vhost.so.24.1
00:02:52.477 [374/378] Generating symbol file drivers/librte_common_mlx5.so.24.1.p/librte_common_mlx5.so.24.1.symbols
00:02:52.477 [375/378] Linking target drivers/librte_crypto_mlx5.so.24.1
00:02:52.477 [376/378] Linking target drivers/librte_crypto_ipsec_mb.so.24.1
00:02:52.477 [377/378] Linking target drivers/librte_compress_mlx5.so.24.1
00:02:53.045 [378/378] Linking target drivers/librte_common_qat.so.24.1
00:02:53.045 INFO: autodetecting backend as ninja
00:02:53.045 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp -j 112
00:02:54.423 CC lib/ut/ut.o
00:02:54.423 CC lib/ut_mock/mock.o
00:02:54.423 CC lib/log/log.o
00:02:54.423 CC lib/log/log_flags.o
00:02:54.423 CC lib/log/log_deprecated.o
00:02:54.423 LIB libspdk_ut.a
00:02:54.423 LIB libspdk_log.a
00:02:54.423 LIB libspdk_ut_mock.a
00:02:54.423 SO libspdk_ut.so.2.0
00:02:54.423 SO libspdk_ut_mock.so.6.0
00:02:54.423 SO libspdk_log.so.7.0
00:02:54.423 SYMLINK libspdk_ut.so
00:02:54.682 SYMLINK libspdk_ut_mock.so
00:02:54.682 SYMLINK libspdk_log.so
00:02:54.941 CC lib/ioat/ioat.o
00:02:54.941 CC lib/dma/dma.o
00:02:54.941 CC lib/util/base64.o
00:02:54.941 CC lib/util/bit_array.o
00:02:54.941 CC lib/util/crc32.o
00:02:54.941 CC lib/util/cpuset.o
00:02:54.941 CC lib/util/crc16.o
00:02:54.941 CC lib/util/crc32c.o
00:02:54.941 CXX lib/trace_parser/trace.o
00:02:54.941 CC lib/util/crc32_ieee.o
00:02:54.941 CC lib/util/crc64.o
00:02:54.941 CC lib/util/dif.o
00:02:54.941 CC lib/util/fd.o
00:02:54.941 CC lib/util/fd_group.o
00:02:54.941 CC lib/util/file.o
00:02:54.941 CC lib/util/hexlify.o
00:02:54.941 CC lib/util/iov.o
00:02:54.941 CC lib/util/math.o
00:02:54.941 CC lib/util/net.o
00:02:54.941 CC lib/util/pipe.o
00:02:54.941 CC lib/util/strerror_tls.o
00:02:54.941 CC lib/util/string.o
00:02:54.941 CC lib/util/uuid.o
00:02:54.941 CC lib/util/xor.o
00:02:54.941 CC lib/util/zipf.o
00:02:55.200 CC lib/vfio_user/host/vfio_user_pci.o
00:02:55.200 CC lib/vfio_user/host/vfio_user.o
00:02:55.200 LIB libspdk_dma.a
00:02:55.200 SO libspdk_dma.so.4.0
00:02:55.200 LIB libspdk_ioat.a
00:02:55.200 SYMLINK libspdk_dma.so
00:02:55.200 SO libspdk_ioat.so.7.0
00:02:55.459 SYMLINK libspdk_ioat.so
00:02:55.459 LIB libspdk_vfio_user.a
00:02:55.459 SO libspdk_vfio_user.so.5.0
00:02:55.459 SYMLINK libspdk_vfio_user.so
00:02:55.459 LIB libspdk_util.a
00:02:55.717 SO libspdk_util.so.10.0
00:02:55.717 SYMLINK libspdk_util.so
00:02:55.717 LIB libspdk_trace_parser.a
00:02:55.976 SO libspdk_trace_parser.so.5.0
00:02:55.976 SYMLINK libspdk_trace_parser.so
00:02:55.976 CC lib/rdma_provider/common.o
00:02:55.976 CC lib/rdma_provider/rdma_provider_verbs.o
00:02:55.976 CC lib/vmd/vmd.o
00:02:55.976 CC lib/vmd/led.o
00:02:55.976 CC lib/conf/conf.o
00:02:55.976 CC lib/reduce/reduce.o
00:02:55.976 CC lib/rdma_utils/rdma_utils.o
00:02:55.976 CC lib/json/json_parse.o
00:02:55.976 CC lib/env_dpdk/env.o
00:02:55.976 CC lib/env_dpdk/memory.o
00:02:55.976 CC lib/json/json_util.o
00:02:56.234 CC lib/env_dpdk/pci.o
00:02:56.234 CC lib/env_dpdk/init.o
00:02:56.234 CC lib/json/json_write.o
00:02:56.234 CC lib/env_dpdk/threads.o
00:02:56.234 CC lib/env_dpdk/pci_virtio.o
00:02:56.234 CC lib/env_dpdk/pci_ioat.o
00:02:56.234 CC lib/env_dpdk/pci_vmd.o
00:02:56.234 CC lib/idxd/idxd_user.o
00:02:56.234 CC lib/env_dpdk/pci_idxd.o
00:02:56.234 CC lib/idxd/idxd.o
00:02:56.234 CC lib/env_dpdk/pci_event.o
00:02:56.234 CC lib/env_dpdk/pci_dpdk_2207.o
00:02:56.234 CC lib/env_dpdk/sigbus_handler.o
00:02:56.234 CC lib/idxd/idxd_kernel.o
00:02:56.234 CC lib/env_dpdk/pci_dpdk.o
00:02:56.234 CC lib/env_dpdk/pci_dpdk_2211.o
00:02:56.234 LIB libspdk_rdma_provider.a
00:02:56.234 LIB libspdk_conf.a
00:02:56.234 SO libspdk_rdma_provider.so.6.0
00:02:56.492 SO libspdk_conf.so.6.0
00:02:56.492 LIB libspdk_rdma_utils.a
00:02:56.492 SYMLINK libspdk_rdma_provider.so
00:02:56.492 SYMLINK libspdk_conf.so
00:02:56.492 SO libspdk_rdma_utils.so.1.0
00:02:56.492 LIB libspdk_json.a
00:02:56.492 SO libspdk_json.so.6.0
00:02:56.493 SYMLINK libspdk_rdma_utils.so
00:02:56.493 SYMLINK libspdk_json.so
00:02:56.751 LIB libspdk_idxd.a
00:02:56.751 SO libspdk_idxd.so.12.0
00:02:56.751 LIB libspdk_vmd.a
00:02:56.751 LIB libspdk_reduce.a
00:02:56.751 SO libspdk_vmd.so.6.0
00:02:56.751 SYMLINK libspdk_idxd.so
00:02:56.751 SO libspdk_reduce.so.6.1
00:02:56.751 SYMLINK libspdk_vmd.so
00:02:57.010 SYMLINK libspdk_reduce.so
00:02:57.010 CC lib/jsonrpc/jsonrpc_server.o
00:02:57.010 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:02:57.010 CC lib/jsonrpc/jsonrpc_client.o
00:02:57.010 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:02:57.270 LIB libspdk_jsonrpc.a
00:02:57.270 SO libspdk_jsonrpc.so.6.0
00:02:57.270 SYMLINK libspdk_jsonrpc.so
00:02:57.529 LIB libspdk_env_dpdk.a
00:02:57.529 SO libspdk_env_dpdk.so.15.0
00:02:57.529 SYMLINK libspdk_env_dpdk.so
00:02:57.787 CC lib/rpc/rpc.o
00:02:58.046 LIB libspdk_rpc.a
00:02:58.046 SO libspdk_rpc.so.6.0
00:02:58.046 SYMLINK libspdk_rpc.so
00:02:58.304 CC lib/keyring/keyring.o
00:02:58.304 CC lib/keyring/keyring_rpc.o
00:02:58.304 CC lib/trace/trace.o
00:02:58.304 CC lib/trace/trace_flags.o
00:02:58.304 CC lib/trace/trace_rpc.o
00:02:58.304 CC lib/notify/notify.o
00:02:58.304 CC lib/notify/notify_rpc.o
00:02:58.563 LIB libspdk_notify.a
00:02:58.563 LIB libspdk_trace.a
00:02:58.563 LIB libspdk_keyring.a
00:02:58.563 SO libspdk_notify.so.6.0
00:02:58.563 SO libspdk_trace.so.10.0
00:02:58.563 SO libspdk_keyring.so.1.0
00:02:58.822 SYMLINK libspdk_notify.so
00:02:58.822 SYMLINK libspdk_trace.so
00:02:58.822 SYMLINK libspdk_keyring.so
00:02:59.081 CC lib/sock/sock_rpc.o
00:02:59.081 CC lib/sock/sock.o
00:02:59.081 CC lib/thread/thread.o
00:02:59.081 CC lib/thread/iobuf.o
00:02:59.340 LIB libspdk_sock.a
00:02:59.599 SO libspdk_sock.so.10.0
00:02:59.599 SYMLINK libspdk_sock.so
00:02:59.858 CC lib/nvme/nvme_ctrlr_cmd.o
00:02:59.858 CC lib/nvme/nvme_ctrlr.o
00:02:59.858 CC lib/nvme/nvme_fabric.o
00:02:59.858 CC lib/nvme/nvme_ns_cmd.o
00:02:59.858 CC lib/nvme/nvme_ns.o
00:02:59.858 CC lib/nvme/nvme_pcie_common.o
00:02:59.858 CC lib/nvme/nvme_pcie.o
00:02:59.858 CC lib/nvme/nvme_qpair.o
00:02:59.858 CC lib/nvme/nvme.o
00:02:59.858 CC lib/nvme/nvme_quirks.o
00:02:59.858 CC lib/nvme/nvme_transport.o
00:02:59.858 CC lib/nvme/nvme_discovery.o
00:02:59.858 CC lib/nvme/nvme_tcp.o
00:02:59.858 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:02:59.858 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:02:59.858 CC lib/nvme/nvme_opal.o
00:02:59.858 CC lib/nvme/nvme_io_msg.o
00:02:59.858 CC lib/nvme/nvme_poll_group.o
00:02:59.858 CC lib/nvme/nvme_zns.o
00:02:59.858 CC lib/nvme/nvme_stubs.o
00:02:59.858 CC lib/nvme/nvme_auth.o
00:02:59.858 CC lib/nvme/nvme_cuse.o
00:02:59.858 CC lib/nvme/nvme_rdma.o
00:03:00.794 LIB libspdk_thread.a
00:03:00.794 SO libspdk_thread.so.10.1
00:03:00.794 SYMLINK libspdk_thread.so
00:03:01.053 CC lib/virtio/virtio.o
00:03:01.053 CC lib/virtio/virtio_pci.o
00:03:01.053 CC lib/virtio/virtio_vhost_user.o
00:03:01.053 CC lib/virtio/virtio_vfio_user.o
00:03:01.053 CC lib/blob/blobstore.o
00:03:01.053 CC lib/blob/request.o
00:03:01.311 CC lib/blob/zeroes.o
00:03:01.311 CC lib/blob/blob_bs_dev.o
00:03:01.311 CC lib/accel/accel.o
00:03:01.311 CC lib/accel/accel_rpc.o
00:03:01.311 CC lib/accel/accel_sw.o
00:03:01.311 CC lib/init/json_config.o
00:03:01.311 CC lib/init/subsystem.o
00:03:01.311 CC lib/init/subsystem_rpc.o
00:03:01.311 CC lib/init/rpc.o
00:03:01.570 LIB libspdk_init.a
00:03:01.570 SO libspdk_init.so.5.0
00:03:01.570 LIB libspdk_virtio.a
00:03:01.570 SO libspdk_virtio.so.7.0
00:03:01.570 SYMLINK libspdk_init.so
00:03:01.570 SYMLINK libspdk_virtio.so
00:03:01.829 CC lib/event/app.o
00:03:01.829 CC lib/event/reactor.o
00:03:01.829 CC lib/event/log_rpc.o
00:03:01.829 CC lib/event/app_rpc.o
00:03:01.829 CC lib/event/scheduler_static.o
00:03:01.829 LIB libspdk_nvme.a
00:03:02.088 SO libspdk_nvme.so.13.1
00:03:02.088 LIB libspdk_accel.a
00:03:02.088 SO libspdk_accel.so.16.0
00:03:02.351 SYMLINK libspdk_accel.so
00:03:02.351 LIB libspdk_event.a
00:03:02.351 SO libspdk_event.so.14.0
00:03:02.351 SYMLINK libspdk_nvme.so
00:03:02.645 SYMLINK libspdk_event.so
00:03:02.645 CC lib/bdev/bdev.o
00:03:02.645 CC lib/bdev/bdev_rpc.o
00:03:02.645 CC lib/bdev/bdev_zone.o
00:03:02.645 CC lib/bdev/part.o
00:03:02.645 CC lib/bdev/scsi_nvme.o
00:03:04.031 LIB libspdk_blob.a
00:03:04.031 SO libspdk_blob.so.11.0
00:03:04.031 SYMLINK libspdk_blob.so
00:03:04.598 CC lib/lvol/lvol.o
00:03:04.598 CC lib/blobfs/blobfs.o
00:03:04.598 CC lib/blobfs/tree.o
00:03:05.166 LIB libspdk_bdev.a
00:03:05.166 SO libspdk_bdev.so.16.0
00:03:05.166 LIB libspdk_blobfs.a
00:03:05.166 SYMLINK libspdk_bdev.so
00:03:05.426 SO libspdk_blobfs.so.10.0
00:03:05.426 LIB libspdk_lvol.a
00:03:05.426 SO libspdk_lvol.so.10.0
00:03:05.426 SYMLINK libspdk_blobfs.so
00:03:05.426 SYMLINK libspdk_lvol.so
00:03:05.686 CC lib/ftl/ftl_core.o
00:03:05.686 CC lib/ftl/ftl_init.o
00:03:05.686 CC lib/nvmf/ctrlr.o
00:03:05.686 CC lib/ftl/ftl_layout.o
00:03:05.686 CC lib/ftl/ftl_debug.o
00:03:05.686 CC lib/ftl/ftl_io.o
00:03:05.686 CC lib/nvmf/ctrlr_discovery.o
00:03:05.686 CC lib/ftl/ftl_sb.o
00:03:05.686 CC lib/nvmf/ctrlr_bdev.o
00:03:05.686 CC lib/nvmf/subsystem.o
00:03:05.686 CC lib/ftl/ftl_l2p.o
00:03:05.686 CC lib/nvmf/transport.o
00:03:05.686 CC lib/nvmf/nvmf.o
00:03:05.686 CC lib/ftl/ftl_l2p_flat.o
00:03:05.686 CC lib/nvmf/nvmf_rpc.o
00:03:05.686 CC lib/ftl/ftl_nv_cache.o
00:03:05.686 CC lib/ftl/ftl_band.o
00:03:05.686 CC lib/nvmf/tcp.o
00:03:05.686 CC lib/nvmf/stubs.o
00:03:05.686 CC lib/ftl/ftl_band_ops.o
00:03:05.686 CC lib/scsi/dev.o
00:03:05.686 CC lib/nvmf/mdns_server.o
00:03:05.686 CC lib/ftl/ftl_writer.o
00:03:05.686 CC lib/scsi/lun.o
00:03:05.686 CC lib/nvmf/rdma.o
00:03:05.686 CC lib/nvmf/auth.o
00:03:05.686 CC lib/scsi/port.o
00:03:05.686 CC lib/ftl/ftl_rq.o
00:03:05.686 CC lib/scsi/scsi.o
00:03:05.686 CC lib/ftl/ftl_reloc.o
00:03:05.686 CC lib/scsi/scsi_bdev.o
00:03:05.686 CC lib/ftl/ftl_l2p_cache.o
00:03:05.686 CC lib/scsi/scsi_pr.o
00:03:05.686 CC lib/ftl/ftl_p2l.o
00:03:05.686 CC lib/scsi/scsi_rpc.o
00:03:05.686 CC lib/nbd/nbd.o
00:03:05.686 CC lib/ftl/mngt/ftl_mngt.o
00:03:05.686 CC lib/scsi/task.o
00:03:05.686 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:03:05.686 CC lib/nbd/nbd_rpc.o
00:03:05.686 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:03:05.686 CC lib/ftl/mngt/ftl_mngt_startup.o
00:03:05.686 CC lib/ftl/mngt/ftl_mngt_md.o
00:03:05.686 CC lib/ftl/mngt/ftl_mngt_misc.o
00:03:05.686 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:03:05.686 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:03:05.686 CC lib/ublk/ublk.o
00:03:05.686 CC lib/ftl/mngt/ftl_mngt_band.o
00:03:05.686 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:03:05.686 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:03:05.686 CC lib/ublk/ublk_rpc.o
00:03:05.686 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:03:05.686 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:03:05.686 CC lib/ftl/utils/ftl_conf.o
00:03:05.686 CC lib/ftl/utils/ftl_md.o
00:03:05.686 CC lib/ftl/utils/ftl_mempool.o
00:03:05.686 CC lib/ftl/utils/ftl_bitmap.o
00:03:05.686 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:03:05.686 CC lib/ftl/utils/ftl_property.o
00:03:05.686 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:03:05.686 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:03:05.686 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:03:05.686 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:03:05.686 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:03:05.686 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:03:05.686 CC lib/ftl/upgrade/ftl_sb_v5.o
00:03:05.686 CC lib/ftl/upgrade/ftl_sb_v3.o
00:03:05.686 CC lib/ftl/nvc/ftl_nvc_dev.o
00:03:05.686 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:03:05.686 CC lib/ftl/base/ftl_base_dev.o
00:03:05.686 CC lib/ftl/base/ftl_base_bdev.o
00:03:05.686 CC lib/ftl/ftl_trace.o
00:03:06.254 LIB libspdk_scsi.a
00:03:06.254 LIB libspdk_nbd.a
00:03:06.254 SO libspdk_scsi.so.9.0
00:03:06.513 SO libspdk_nbd.so.7.0
00:03:06.513 LIB libspdk_ublk.a
00:03:06.513 SO libspdk_ublk.so.3.0
00:03:06.513 SYMLINK libspdk_nbd.so
00:03:06.513 SYMLINK libspdk_scsi.so
00:03:06.513 SYMLINK libspdk_ublk.so
00:03:06.773 LIB libspdk_ftl.a
00:03:06.773 CC lib/iscsi/conn.o
00:03:06.773 CC lib/iscsi/init_grp.o
00:03:06.773 CC lib/iscsi/iscsi.o
00:03:06.773 CC lib/iscsi/param.o
00:03:06.773 CC lib/iscsi/md5.o
00:03:06.773 CC lib/iscsi/portal_grp.o
00:03:06.773 CC lib/vhost/vhost.o
00:03:06.773 CC lib/vhost/vhost_rpc.o
00:03:06.773 CC lib/iscsi/tgt_node.o
00:03:06.773 CC lib/vhost/vhost_scsi.o
00:03:06.773 CC lib/iscsi/iscsi_subsystem.o
00:03:06.773 CC lib/vhost/rte_vhost_user.o
00:03:06.773 CC lib/vhost/vhost_blk.o
00:03:06.773 CC lib/iscsi/iscsi_rpc.o
00:03:06.773 CC lib/iscsi/task.o
00:03:07.033 SO libspdk_ftl.so.9.0
00:03:07.292 SYMLINK libspdk_ftl.so
00:03:07.551 LIB libspdk_nvmf.a
00:03:07.811 SO libspdk_nvmf.so.19.0
00:03:07.811 LIB libspdk_vhost.a
00:03:07.811 SO libspdk_vhost.so.8.0
00:03:08.071 SYMLINK libspdk_nvmf.so
00:03:08.071 SYMLINK libspdk_vhost.so
00:03:08.071 LIB libspdk_iscsi.a
00:03:08.071 SO libspdk_iscsi.so.8.0
00:03:08.330 SYMLINK libspdk_iscsi.so
00:03:08.897 CC module/env_dpdk/env_dpdk_rpc.o
00:03:09.157 LIB libspdk_env_dpdk_rpc.a
00:03:09.157 CC module/scheduler/dynamic/scheduler_dynamic.o
00:03:09.157 CC module/sock/posix/posix.o
00:03:09.157 CC module/accel/iaa/accel_iaa_rpc.o
00:03:09.157 CC module/accel/iaa/accel_iaa.o
00:03:09.157 CC module/accel/error/accel_error.o
00:03:09.157 CC module/accel/error/accel_error_rpc.o
00:03:09.157 CC module/keyring/linux/keyring.o
00:03:09.157 CC
module/keyring/linux/keyring_rpc.o 00:03:09.157 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:09.157 CC module/keyring/file/keyring.o 00:03:09.157 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev.o 00:03:09.157 CC module/keyring/file/keyring_rpc.o 00:03:09.157 CC module/accel/ioat/accel_ioat.o 00:03:09.157 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev_rpc.o 00:03:09.157 CC module/accel/dsa/accel_dsa.o 00:03:09.157 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev.o 00:03:09.157 CC module/accel/ioat/accel_ioat_rpc.o 00:03:09.157 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev_rpc.o 00:03:09.157 CC module/accel/dsa/accel_dsa_rpc.o 00:03:09.157 CC module/blob/bdev/blob_bdev.o 00:03:09.157 CC module/scheduler/gscheduler/gscheduler.o 00:03:09.157 SO libspdk_env_dpdk_rpc.so.6.0 00:03:09.157 SYMLINK libspdk_env_dpdk_rpc.so 00:03:09.157 LIB libspdk_accel_error.a 00:03:09.157 LIB libspdk_keyring_linux.a 00:03:09.416 LIB libspdk_keyring_file.a 00:03:09.416 LIB libspdk_scheduler_dpdk_governor.a 00:03:09.416 LIB libspdk_scheduler_gscheduler.a 00:03:09.416 SO libspdk_accel_error.so.2.0 00:03:09.416 SO libspdk_keyring_linux.so.1.0 00:03:09.416 LIB libspdk_accel_ioat.a 00:03:09.416 LIB libspdk_scheduler_dynamic.a 00:03:09.416 SO libspdk_keyring_file.so.1.0 00:03:09.416 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:09.416 SO libspdk_scheduler_gscheduler.so.4.0 00:03:09.416 LIB libspdk_accel_iaa.a 00:03:09.416 SO libspdk_scheduler_dynamic.so.4.0 00:03:09.416 SO libspdk_accel_ioat.so.6.0 00:03:09.416 SO libspdk_accel_iaa.so.3.0 00:03:09.416 SYMLINK libspdk_keyring_linux.so 00:03:09.416 SYMLINK libspdk_accel_error.so 00:03:09.416 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:09.416 LIB libspdk_accel_dsa.a 00:03:09.416 SYMLINK libspdk_keyring_file.so 00:03:09.416 SYMLINK libspdk_scheduler_gscheduler.so 00:03:09.416 LIB libspdk_blob_bdev.a 00:03:09.416 SYMLINK libspdk_accel_ioat.so 00:03:09.416 SO libspdk_accel_dsa.so.5.0 00:03:09.416 SYMLINK 
libspdk_scheduler_dynamic.so 00:03:09.416 SO libspdk_blob_bdev.so.11.0 00:03:09.416 SYMLINK libspdk_accel_iaa.so 00:03:09.416 SYMLINK libspdk_accel_dsa.so 00:03:09.416 SYMLINK libspdk_blob_bdev.so 00:03:09.675 LIB libspdk_sock_posix.a 00:03:09.933 SO libspdk_sock_posix.so.6.0 00:03:09.933 SYMLINK libspdk_sock_posix.so 00:03:09.933 CC module/bdev/null/bdev_null.o 00:03:09.933 CC module/bdev/null/bdev_null_rpc.o 00:03:09.933 CC module/bdev/malloc/bdev_malloc.o 00:03:09.933 CC module/bdev/nvme/bdev_nvme.o 00:03:09.933 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:09.933 CC module/bdev/nvme/bdev_mdns_client.o 00:03:09.933 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:09.933 CC module/bdev/nvme/nvme_rpc.o 00:03:09.933 CC module/bdev/nvme/vbdev_opal.o 00:03:09.933 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:09.933 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:09.933 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:09.933 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:09.933 CC module/bdev/lvol/vbdev_lvol.o 00:03:09.934 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:09.934 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:10.192 CC module/bdev/error/vbdev_error.o 00:03:10.192 CC module/bdev/gpt/vbdev_gpt.o 00:03:10.192 CC module/bdev/error/vbdev_error_rpc.o 00:03:10.192 CC module/blobfs/bdev/blobfs_bdev.o 00:03:10.192 CC module/bdev/iscsi/bdev_iscsi.o 00:03:10.192 CC module/bdev/gpt/gpt.o 00:03:10.192 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:10.192 CC module/bdev/passthru/vbdev_passthru.o 00:03:10.192 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:10.192 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:10.192 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:10.192 CC module/bdev/crypto/vbdev_crypto.o 00:03:10.192 CC module/bdev/compress/vbdev_compress.o 00:03:10.192 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:10.192 CC module/bdev/crypto/vbdev_crypto_rpc.o 00:03:10.192 CC module/bdev/compress/vbdev_compress_rpc.o 00:03:10.192 CC module/bdev/split/vbdev_split_rpc.o 
00:03:10.192 CC module/bdev/delay/vbdev_delay.o 00:03:10.192 CC module/bdev/split/vbdev_split.o 00:03:10.192 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:10.192 CC module/bdev/raid/bdev_raid.o 00:03:10.192 CC module/bdev/raid/bdev_raid_sb.o 00:03:10.192 CC module/bdev/raid/bdev_raid_rpc.o 00:03:10.192 CC module/bdev/raid/raid1.o 00:03:10.192 CC module/bdev/raid/raid0.o 00:03:10.192 CC module/bdev/raid/concat.o 00:03:10.192 CC module/bdev/ftl/bdev_ftl.o 00:03:10.192 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:10.192 CC module/bdev/aio/bdev_aio.o 00:03:10.192 CC module/bdev/aio/bdev_aio_rpc.o 00:03:10.192 LIB libspdk_accel_dpdk_cryptodev.a 00:03:10.192 LIB libspdk_accel_dpdk_compressdev.a 00:03:10.192 SO libspdk_accel_dpdk_cryptodev.so.3.0 00:03:10.192 SO libspdk_accel_dpdk_compressdev.so.3.0 00:03:10.192 SYMLINK libspdk_accel_dpdk_cryptodev.so 00:03:10.451 SYMLINK libspdk_accel_dpdk_compressdev.so 00:03:10.451 LIB libspdk_blobfs_bdev.a 00:03:10.451 LIB libspdk_bdev_split.a 00:03:10.451 SO libspdk_blobfs_bdev.so.6.0 00:03:10.451 LIB libspdk_bdev_null.a 00:03:10.451 SO libspdk_bdev_split.so.6.0 00:03:10.451 SO libspdk_bdev_null.so.6.0 00:03:10.451 LIB libspdk_bdev_error.a 00:03:10.451 LIB libspdk_bdev_gpt.a 00:03:10.451 SYMLINK libspdk_blobfs_bdev.so 00:03:10.451 SO libspdk_bdev_error.so.6.0 00:03:10.451 SYMLINK libspdk_bdev_split.so 00:03:10.451 LIB libspdk_bdev_ftl.a 00:03:10.451 LIB libspdk_bdev_passthru.a 00:03:10.451 SO libspdk_bdev_gpt.so.6.0 00:03:10.451 SYMLINK libspdk_bdev_null.so 00:03:10.451 LIB libspdk_bdev_zone_block.a 00:03:10.451 LIB libspdk_bdev_malloc.a 00:03:10.451 LIB libspdk_bdev_crypto.a 00:03:10.451 SO libspdk_bdev_ftl.so.6.0 00:03:10.451 SO libspdk_bdev_zone_block.so.6.0 00:03:10.451 SO libspdk_bdev_passthru.so.6.0 00:03:10.451 LIB libspdk_bdev_iscsi.a 00:03:10.451 LIB libspdk_bdev_aio.a 00:03:10.451 SO libspdk_bdev_malloc.so.6.0 00:03:10.451 SYMLINK libspdk_bdev_error.so 00:03:10.451 SYMLINK libspdk_bdev_gpt.so 00:03:10.451 LIB 
libspdk_bdev_compress.a 00:03:10.451 SO libspdk_bdev_crypto.so.6.0 00:03:10.451 SO libspdk_bdev_iscsi.so.6.0 00:03:10.710 SO libspdk_bdev_aio.so.6.0 00:03:10.710 SYMLINK libspdk_bdev_ftl.so 00:03:10.710 SO libspdk_bdev_compress.so.6.0 00:03:10.710 SYMLINK libspdk_bdev_zone_block.so 00:03:10.710 SYMLINK libspdk_bdev_passthru.so 00:03:10.710 SYMLINK libspdk_bdev_malloc.so 00:03:10.710 SYMLINK libspdk_bdev_crypto.so 00:03:10.710 SYMLINK libspdk_bdev_iscsi.so 00:03:10.710 LIB libspdk_bdev_lvol.a 00:03:10.710 SYMLINK libspdk_bdev_aio.so 00:03:10.710 SYMLINK libspdk_bdev_compress.so 00:03:10.710 LIB libspdk_bdev_virtio.a 00:03:10.710 SO libspdk_bdev_lvol.so.6.0 00:03:10.710 SO libspdk_bdev_virtio.so.6.0 00:03:10.710 SYMLINK libspdk_bdev_lvol.so 00:03:10.710 SYMLINK libspdk_bdev_virtio.so 00:03:10.969 LIB libspdk_bdev_delay.a 00:03:10.969 SO libspdk_bdev_delay.so.6.0 00:03:10.969 SYMLINK libspdk_bdev_delay.so 00:03:11.229 LIB libspdk_bdev_raid.a 00:03:11.229 SO libspdk_bdev_raid.so.6.0 00:03:11.229 SYMLINK libspdk_bdev_raid.so 00:03:12.167 LIB libspdk_bdev_nvme.a 00:03:12.167 SO libspdk_bdev_nvme.so.7.0 00:03:12.427 SYMLINK libspdk_bdev_nvme.so 00:03:13.364 CC module/event/subsystems/keyring/keyring.o 00:03:13.364 CC module/event/subsystems/scheduler/scheduler.o 00:03:13.364 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:13.364 CC module/event/subsystems/vmd/vmd.o 00:03:13.364 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:13.364 CC module/event/subsystems/iobuf/iobuf.o 00:03:13.364 CC module/event/subsystems/sock/sock.o 00:03:13.364 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:13.364 LIB libspdk_event_keyring.a 00:03:13.364 LIB libspdk_event_scheduler.a 00:03:13.364 LIB libspdk_event_vhost_blk.a 00:03:13.364 LIB libspdk_event_vmd.a 00:03:13.364 SO libspdk_event_keyring.so.1.0 00:03:13.364 SO libspdk_event_scheduler.so.4.0 00:03:13.364 LIB libspdk_event_sock.a 00:03:13.364 LIB libspdk_event_iobuf.a 00:03:13.364 SO libspdk_event_vhost_blk.so.3.0 00:03:13.364 
SO libspdk_event_vmd.so.6.0 00:03:13.364 SO libspdk_event_iobuf.so.3.0 00:03:13.364 SO libspdk_event_sock.so.5.0 00:03:13.364 SYMLINK libspdk_event_keyring.so 00:03:13.364 SYMLINK libspdk_event_scheduler.so 00:03:13.364 SYMLINK libspdk_event_vhost_blk.so 00:03:13.364 SYMLINK libspdk_event_sock.so 00:03:13.364 SYMLINK libspdk_event_vmd.so 00:03:13.364 SYMLINK libspdk_event_iobuf.so 00:03:13.932 CC module/event/subsystems/accel/accel.o 00:03:13.932 LIB libspdk_event_accel.a 00:03:13.932 SO libspdk_event_accel.so.6.0 00:03:14.191 SYMLINK libspdk_event_accel.so 00:03:14.449 CC module/event/subsystems/bdev/bdev.o 00:03:14.708 LIB libspdk_event_bdev.a 00:03:14.708 SO libspdk_event_bdev.so.6.0 00:03:14.708 SYMLINK libspdk_event_bdev.so 00:03:14.967 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:14.967 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:15.227 CC module/event/subsystems/nbd/nbd.o 00:03:15.227 CC module/event/subsystems/ublk/ublk.o 00:03:15.227 CC module/event/subsystems/scsi/scsi.o 00:03:15.227 LIB libspdk_event_nbd.a 00:03:15.227 LIB libspdk_event_ublk.a 00:03:15.227 SO libspdk_event_nbd.so.6.0 00:03:15.227 LIB libspdk_event_scsi.a 00:03:15.227 SO libspdk_event_ublk.so.3.0 00:03:15.227 LIB libspdk_event_nvmf.a 00:03:15.227 SO libspdk_event_scsi.so.6.0 00:03:15.227 SYMLINK libspdk_event_nbd.so 00:03:15.486 SO libspdk_event_nvmf.so.6.0 00:03:15.486 SYMLINK libspdk_event_ublk.so 00:03:15.486 SYMLINK libspdk_event_scsi.so 00:03:15.486 SYMLINK libspdk_event_nvmf.so 00:03:15.745 CC module/event/subsystems/iscsi/iscsi.o 00:03:15.745 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:16.004 LIB libspdk_event_vhost_scsi.a 00:03:16.004 LIB libspdk_event_iscsi.a 00:03:16.004 SO libspdk_event_vhost_scsi.so.3.0 00:03:16.004 SO libspdk_event_iscsi.so.6.0 00:03:16.004 SYMLINK libspdk_event_vhost_scsi.so 00:03:16.004 SYMLINK libspdk_event_iscsi.so 00:03:16.264 SO libspdk.so.6.0 00:03:16.264 SYMLINK libspdk.so 00:03:16.840 CXX app/trace/trace.o 00:03:16.840 CC 
app/spdk_nvme_perf/perf.o 00:03:16.840 CC app/spdk_lspci/spdk_lspci.o 00:03:16.840 CC app/spdk_nvme_discover/discovery_aer.o 00:03:16.840 CC app/spdk_nvme_identify/identify.o 00:03:16.840 CC test/rpc_client/rpc_client_test.o 00:03:16.840 TEST_HEADER include/spdk/accel.h 00:03:16.840 TEST_HEADER include/spdk/accel_module.h 00:03:16.840 TEST_HEADER include/spdk/assert.h 00:03:16.840 CC app/trace_record/trace_record.o 00:03:16.840 TEST_HEADER include/spdk/barrier.h 00:03:16.840 TEST_HEADER include/spdk/base64.h 00:03:16.840 TEST_HEADER include/spdk/bdev_module.h 00:03:16.840 TEST_HEADER include/spdk/bdev.h 00:03:16.840 TEST_HEADER include/spdk/bdev_zone.h 00:03:16.840 CC app/spdk_top/spdk_top.o 00:03:16.840 TEST_HEADER include/spdk/bit_array.h 00:03:16.840 TEST_HEADER include/spdk/bit_pool.h 00:03:16.840 TEST_HEADER include/spdk/blob_bdev.h 00:03:16.840 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:16.840 TEST_HEADER include/spdk/blobfs.h 00:03:16.840 TEST_HEADER include/spdk/blob.h 00:03:16.840 TEST_HEADER include/spdk/config.h 00:03:16.840 TEST_HEADER include/spdk/conf.h 00:03:16.840 TEST_HEADER include/spdk/cpuset.h 00:03:16.840 TEST_HEADER include/spdk/crc16.h 00:03:16.840 TEST_HEADER include/spdk/crc64.h 00:03:16.840 TEST_HEADER include/spdk/crc32.h 00:03:16.840 TEST_HEADER include/spdk/dif.h 00:03:16.840 TEST_HEADER include/spdk/dma.h 00:03:16.840 TEST_HEADER include/spdk/endian.h 00:03:16.840 TEST_HEADER include/spdk/env.h 00:03:16.840 TEST_HEADER include/spdk/env_dpdk.h 00:03:16.840 TEST_HEADER include/spdk/event.h 00:03:16.840 TEST_HEADER include/spdk/fd_group.h 00:03:16.840 TEST_HEADER include/spdk/ftl.h 00:03:16.840 TEST_HEADER include/spdk/fd.h 00:03:16.840 TEST_HEADER include/spdk/gpt_spec.h 00:03:16.840 TEST_HEADER include/spdk/file.h 00:03:16.840 TEST_HEADER include/spdk/hexlify.h 00:03:16.840 TEST_HEADER include/spdk/histogram_data.h 00:03:16.840 TEST_HEADER include/spdk/idxd_spec.h 00:03:16.840 TEST_HEADER include/spdk/ioat.h 00:03:16.840 TEST_HEADER 
include/spdk/idxd.h 00:03:16.840 TEST_HEADER include/spdk/iscsi_spec.h 00:03:16.840 TEST_HEADER include/spdk/init.h 00:03:16.840 TEST_HEADER include/spdk/ioat_spec.h 00:03:16.840 TEST_HEADER include/spdk/json.h 00:03:16.840 TEST_HEADER include/spdk/jsonrpc.h 00:03:16.840 TEST_HEADER include/spdk/keyring_module.h 00:03:16.840 TEST_HEADER include/spdk/keyring.h 00:03:16.840 TEST_HEADER include/spdk/likely.h 00:03:16.840 CC app/iscsi_tgt/iscsi_tgt.o 00:03:16.840 TEST_HEADER include/spdk/log.h 00:03:16.840 TEST_HEADER include/spdk/lvol.h 00:03:16.840 TEST_HEADER include/spdk/memory.h 00:03:16.840 TEST_HEADER include/spdk/mmio.h 00:03:16.840 TEST_HEADER include/spdk/nbd.h 00:03:16.840 TEST_HEADER include/spdk/notify.h 00:03:16.840 CC app/spdk_dd/spdk_dd.o 00:03:16.840 TEST_HEADER include/spdk/net.h 00:03:16.840 TEST_HEADER include/spdk/nvme.h 00:03:16.840 TEST_HEADER include/spdk/nvme_intel.h 00:03:16.840 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:16.840 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:16.840 TEST_HEADER include/spdk/nvme_spec.h 00:03:16.840 TEST_HEADER include/spdk/nvme_zns.h 00:03:16.840 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:16.840 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:16.840 TEST_HEADER include/spdk/nvmf_spec.h 00:03:16.840 TEST_HEADER include/spdk/nvmf.h 00:03:16.841 TEST_HEADER include/spdk/nvmf_transport.h 00:03:16.841 TEST_HEADER include/spdk/opal.h 00:03:16.841 CC app/nvmf_tgt/nvmf_main.o 00:03:16.841 TEST_HEADER include/spdk/opal_spec.h 00:03:16.841 TEST_HEADER include/spdk/pci_ids.h 00:03:16.841 TEST_HEADER include/spdk/queue.h 00:03:16.841 TEST_HEADER include/spdk/pipe.h 00:03:16.841 TEST_HEADER include/spdk/reduce.h 00:03:16.841 TEST_HEADER include/spdk/scheduler.h 00:03:16.841 TEST_HEADER include/spdk/rpc.h 00:03:16.841 TEST_HEADER include/spdk/scsi.h 00:03:16.841 TEST_HEADER include/spdk/scsi_spec.h 00:03:16.841 TEST_HEADER include/spdk/stdinc.h 00:03:16.841 TEST_HEADER include/spdk/sock.h 00:03:16.841 TEST_HEADER 
include/spdk/thread.h 00:03:16.841 CC app/spdk_tgt/spdk_tgt.o 00:03:16.841 TEST_HEADER include/spdk/string.h 00:03:16.841 TEST_HEADER include/spdk/trace.h 00:03:16.841 TEST_HEADER include/spdk/trace_parser.h 00:03:16.841 TEST_HEADER include/spdk/ublk.h 00:03:16.841 TEST_HEADER include/spdk/tree.h 00:03:16.841 TEST_HEADER include/spdk/util.h 00:03:16.841 TEST_HEADER include/spdk/uuid.h 00:03:16.841 TEST_HEADER include/spdk/version.h 00:03:16.841 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:16.841 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:16.841 TEST_HEADER include/spdk/vhost.h 00:03:16.841 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:16.841 TEST_HEADER include/spdk/xor.h 00:03:16.841 TEST_HEADER include/spdk/vmd.h 00:03:16.841 TEST_HEADER include/spdk/zipf.h 00:03:16.841 CXX test/cpp_headers/accel.o 00:03:16.841 CXX test/cpp_headers/accel_module.o 00:03:16.841 CXX test/cpp_headers/assert.o 00:03:16.841 CXX test/cpp_headers/barrier.o 00:03:16.841 CXX test/cpp_headers/base64.o 00:03:16.841 CXX test/cpp_headers/bdev_module.o 00:03:16.841 CXX test/cpp_headers/bdev.o 00:03:16.841 CXX test/cpp_headers/bit_array.o 00:03:16.841 CXX test/cpp_headers/bdev_zone.o 00:03:16.841 CXX test/cpp_headers/bit_pool.o 00:03:16.841 CXX test/cpp_headers/blobfs_bdev.o 00:03:16.841 CXX test/cpp_headers/blob_bdev.o 00:03:16.841 CXX test/cpp_headers/blob.o 00:03:16.841 CXX test/cpp_headers/conf.o 00:03:16.841 CXX test/cpp_headers/blobfs.o 00:03:16.841 CXX test/cpp_headers/config.o 00:03:16.841 CXX test/cpp_headers/crc16.o 00:03:16.841 CXX test/cpp_headers/cpuset.o 00:03:16.841 CXX test/cpp_headers/crc64.o 00:03:16.841 CXX test/cpp_headers/crc32.o 00:03:16.841 CXX test/cpp_headers/dma.o 00:03:16.841 CXX test/cpp_headers/dif.o 00:03:16.841 CXX test/cpp_headers/env_dpdk.o 00:03:16.841 CXX test/cpp_headers/endian.o 00:03:16.841 CXX test/cpp_headers/env.o 00:03:16.841 CXX test/cpp_headers/event.o 00:03:16.841 CXX test/cpp_headers/fd_group.o 00:03:16.841 CXX test/cpp_headers/file.o 
00:03:16.841 CXX test/cpp_headers/fd.o 00:03:16.841 CXX test/cpp_headers/gpt_spec.o 00:03:16.841 CXX test/cpp_headers/ftl.o 00:03:16.841 CXX test/cpp_headers/histogram_data.o 00:03:16.841 CXX test/cpp_headers/hexlify.o 00:03:16.841 CXX test/cpp_headers/idxd.o 00:03:16.841 CXX test/cpp_headers/idxd_spec.o 00:03:16.841 CXX test/cpp_headers/init.o 00:03:16.841 CXX test/cpp_headers/iscsi_spec.o 00:03:16.841 CXX test/cpp_headers/ioat.o 00:03:16.841 CXX test/cpp_headers/ioat_spec.o 00:03:16.841 CXX test/cpp_headers/json.o 00:03:16.841 CXX test/cpp_headers/jsonrpc.o 00:03:16.841 CXX test/cpp_headers/keyring.o 00:03:16.841 CXX test/cpp_headers/keyring_module.o 00:03:16.841 CXX test/cpp_headers/likely.o 00:03:16.841 CXX test/cpp_headers/log.o 00:03:16.841 CXX test/cpp_headers/lvol.o 00:03:16.841 CXX test/cpp_headers/mmio.o 00:03:16.841 CXX test/cpp_headers/memory.o 00:03:16.841 CXX test/cpp_headers/nbd.o 00:03:16.841 CXX test/cpp_headers/notify.o 00:03:16.841 CXX test/cpp_headers/net.o 00:03:16.841 CXX test/cpp_headers/nvme.o 00:03:16.841 CXX test/cpp_headers/nvme_intel.o 00:03:16.841 CC examples/ioat/verify/verify.o 00:03:16.841 CXX test/cpp_headers/nvme_ocssd.o 00:03:16.841 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:16.841 CXX test/cpp_headers/nvmf_cmd.o 00:03:16.841 CXX test/cpp_headers/nvme_spec.o 00:03:16.841 CXX test/cpp_headers/nvme_zns.o 00:03:16.841 CC examples/util/zipf/zipf.o 00:03:16.841 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:16.841 CXX test/cpp_headers/nvmf.o 00:03:16.841 CXX test/cpp_headers/nvmf_spec.o 00:03:16.841 CXX test/cpp_headers/nvmf_transport.o 00:03:16.841 CXX test/cpp_headers/opal.o 00:03:16.841 CXX test/cpp_headers/pci_ids.o 00:03:16.841 CXX test/cpp_headers/opal_spec.o 00:03:16.841 CXX test/cpp_headers/pipe.o 00:03:16.841 CXX test/cpp_headers/queue.o 00:03:16.841 CXX test/cpp_headers/reduce.o 00:03:16.841 CC test/thread/poller_perf/poller_perf.o 00:03:16.841 CXX test/cpp_headers/rpc.o 00:03:16.841 CC test/env/pci/pci_ut.o 00:03:16.841 CXX 
test/cpp_headers/scheduler.o 00:03:16.841 CXX test/cpp_headers/scsi.o 00:03:16.841 CXX test/cpp_headers/scsi_spec.o 00:03:16.841 CXX test/cpp_headers/sock.o 00:03:16.841 CXX test/cpp_headers/stdinc.o 00:03:16.841 CXX test/cpp_headers/string.o 00:03:16.841 CXX test/cpp_headers/thread.o 00:03:16.841 CXX test/cpp_headers/trace.o 00:03:16.841 CC test/app/histogram_perf/histogram_perf.o 00:03:16.841 CXX test/cpp_headers/trace_parser.o 00:03:16.841 CXX test/cpp_headers/tree.o 00:03:16.841 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:16.841 CXX test/cpp_headers/ublk.o 00:03:16.841 CXX test/cpp_headers/util.o 00:03:16.841 CC examples/ioat/perf/perf.o 00:03:16.841 CXX test/cpp_headers/uuid.o 00:03:16.841 CXX test/cpp_headers/version.o 00:03:16.841 CC test/app/stub/stub.o 00:03:16.841 CC test/env/vtophys/vtophys.o 00:03:16.841 CC test/env/memory/memory_ut.o 00:03:16.841 CC app/fio/nvme/fio_plugin.o 00:03:16.841 LINK spdk_lspci 00:03:17.159 CC test/app/jsoncat/jsoncat.o 00:03:17.159 CC test/dma/test_dma/test_dma.o 00:03:17.159 CXX test/cpp_headers/vfio_user_pci.o 00:03:17.159 CC test/app/bdev_svc/bdev_svc.o 00:03:17.159 CC app/fio/bdev/fio_plugin.o 00:03:17.159 CXX test/cpp_headers/vfio_user_spec.o 00:03:17.159 LINK rpc_client_test 00:03:17.478 LINK spdk_trace_record 00:03:17.478 LINK spdk_nvme_discover 00:03:17.478 CC test/env/mem_callbacks/mem_callbacks.o 00:03:17.478 LINK nvmf_tgt 00:03:17.737 LINK iscsi_tgt 00:03:17.737 LINK interrupt_tgt 00:03:17.737 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:17.737 LINK poller_perf 00:03:17.737 LINK histogram_perf 00:03:17.737 CXX test/cpp_headers/vhost.o 00:03:17.737 LINK jsoncat 00:03:17.737 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:17.737 CXX test/cpp_headers/vmd.o 00:03:17.737 LINK zipf 00:03:17.737 CXX test/cpp_headers/xor.o 00:03:17.737 CXX test/cpp_headers/zipf.o 00:03:17.737 LINK spdk_tgt 00:03:17.737 LINK vtophys 00:03:17.737 LINK verify 00:03:17.737 LINK env_dpdk_post_init 00:03:17.737 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:17.737 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:17.737 LINK stub 00:03:17.737 LINK ioat_perf 00:03:17.737 LINK bdev_svc 00:03:17.737 LINK spdk_dd 00:03:17.996 LINK spdk_trace 00:03:17.996 LINK pci_ut 00:03:17.996 LINK test_dma 00:03:18.255 LINK nvme_fuzz 00:03:18.255 LINK spdk_bdev 00:03:18.255 LINK vhost_fuzz 00:03:18.255 LINK spdk_nvme 00:03:18.255 CC examples/vmd/lsvmd/lsvmd.o 00:03:18.255 CC examples/vmd/led/led.o 00:03:18.255 CC examples/sock/hello_world/hello_sock.o 00:03:18.255 LINK spdk_nvme_identify 00:03:18.255 CC examples/idxd/perf/perf.o 00:03:18.255 LINK spdk_nvme_perf 00:03:18.255 LINK mem_callbacks 00:03:18.255 CC examples/thread/thread/thread_ex.o 00:03:18.255 CC test/event/reactor/reactor.o 00:03:18.255 CC test/event/reactor_perf/reactor_perf.o 00:03:18.255 CC test/event/event_perf/event_perf.o 00:03:18.255 LINK spdk_top 00:03:18.255 CC test/event/app_repeat/app_repeat.o 00:03:18.255 CC app/vhost/vhost.o 00:03:18.255 CC test/event/scheduler/scheduler.o 00:03:18.514 LINK lsvmd 00:03:18.514 LINK led 00:03:18.514 CC test/nvme/aer/aer.o 00:03:18.514 LINK reactor 00:03:18.514 CC test/nvme/connect_stress/connect_stress.o 00:03:18.514 CC test/nvme/fused_ordering/fused_ordering.o 00:03:18.514 LINK reactor_perf 00:03:18.514 CC test/nvme/sgl/sgl.o 00:03:18.514 CC test/nvme/boot_partition/boot_partition.o 00:03:18.514 CC test/nvme/cuse/cuse.o 00:03:18.514 CC test/nvme/e2edp/nvme_dp.o 00:03:18.514 CC test/nvme/overhead/overhead.o 00:03:18.514 CC test/nvme/reset/reset.o 00:03:18.514 LINK event_perf 00:03:18.514 CC test/nvme/compliance/nvme_compliance.o 00:03:18.514 CC test/nvme/err_injection/err_injection.o 00:03:18.514 CC test/nvme/simple_copy/simple_copy.o 00:03:18.514 CC test/nvme/startup/startup.o 00:03:18.514 LINK hello_sock 00:03:18.514 CC test/nvme/reserve/reserve.o 00:03:18.514 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:18.514 LINK app_repeat 00:03:18.514 CC test/blobfs/mkfs/mkfs.o 00:03:18.514 
CC test/nvme/fdp/fdp.o 00:03:18.514 CC test/accel/dif/dif.o 00:03:18.514 LINK vhost 00:03:18.514 LINK thread 00:03:18.773 LINK idxd_perf 00:03:18.773 LINK memory_ut 00:03:18.773 LINK boot_partition 00:03:18.773 LINK connect_stress 00:03:18.773 LINK scheduler 00:03:18.773 CC test/lvol/esnap/esnap.o 00:03:18.773 LINK startup 00:03:18.773 LINK err_injection 00:03:18.773 LINK fused_ordering 00:03:18.773 LINK doorbell_aers 00:03:18.773 LINK sgl 00:03:18.773 LINK mkfs 00:03:18.773 LINK reserve 00:03:18.773 LINK simple_copy 00:03:18.773 LINK aer 00:03:18.773 LINK reset 00:03:18.773 LINK overhead 00:03:18.773 LINK nvme_dp 00:03:18.773 LINK nvme_compliance 00:03:19.031 LINK fdp 00:03:19.031 CC examples/nvme/arbitration/arbitration.o 00:03:19.031 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:19.031 CC examples/nvme/reconnect/reconnect.o 00:03:19.031 CC examples/nvme/abort/abort.o 00:03:19.031 LINK dif 00:03:19.031 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:19.031 CC examples/nvme/hello_world/hello_world.o 00:03:19.031 CC examples/nvme/hotplug/hotplug.o 00:03:19.031 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:19.289 CC examples/accel/perf/accel_perf.o 00:03:19.289 LINK pmr_persistence 00:03:19.289 CC examples/blob/cli/blobcli.o 00:03:19.289 CC examples/blob/hello_world/hello_blob.o 00:03:19.289 LINK cmb_copy 00:03:19.289 LINK iscsi_fuzz 00:03:19.289 LINK hello_world 00:03:19.289 LINK hotplug 00:03:19.289 LINK arbitration 00:03:19.547 LINK reconnect 00:03:19.547 LINK abort 00:03:19.547 LINK nvme_manage 00:03:19.547 LINK hello_blob 00:03:19.805 CC test/bdev/bdevio/bdevio.o 00:03:19.805 LINK accel_perf 00:03:19.805 LINK cuse 00:03:19.805 LINK blobcli 00:03:20.063 LINK bdevio 00:03:20.321 CC examples/bdev/hello_world/hello_bdev.o 00:03:20.321 CC examples/bdev/bdevperf/bdevperf.o 00:03:20.580 LINK hello_bdev 00:03:21.146 LINK bdevperf 00:03:21.712 CC examples/nvmf/nvmf/nvmf.o 00:03:21.970 LINK nvmf 00:03:23.345 LINK esnap 00:03:23.911 00:03:23.911 real 1m27.474s 
00:03:23.911 user 15m35.775s 00:03:23.911 sys 5m32.008s 00:03:23.911 13:03:04 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:23.911 13:03:04 make -- common/autotest_common.sh@10 -- $ set +x 00:03:23.911 ************************************ 00:03:23.911 END TEST make 00:03:23.911 ************************************ 00:03:23.911 13:03:04 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:23.911 13:03:04 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:23.911 13:03:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:23.911 13:03:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.911 13:03:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:23.911 13:03:04 -- pm/common@44 -- $ pid=467129 00:03:23.911 13:03:04 -- pm/common@50 -- $ kill -TERM 467129 00:03:23.911 13:03:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.911 13:03:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:23.911 13:03:04 -- pm/common@44 -- $ pid=467132 00:03:23.911 13:03:04 -- pm/common@50 -- $ kill -TERM 467132 00:03:23.911 13:03:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.911 13:03:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:23.912 13:03:04 -- pm/common@44 -- $ pid=467134 00:03:23.912 13:03:04 -- pm/common@50 -- $ kill -TERM 467134 00:03:23.912 13:03:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.912 13:03:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:23.912 13:03:04 -- pm/common@44 -- $ pid=467155 00:03:23.912 13:03:04 -- pm/common@50 -- $ sudo -E kill -TERM 467155 00:03:23.912 13:03:04 -- spdk/autotest.sh@25 -- # source 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:03:23.912 13:03:04 -- nvmf/common.sh@7 -- # uname -s 00:03:23.912 13:03:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:23.912 13:03:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:23.912 13:03:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:23.912 13:03:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:23.912 13:03:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:23.912 13:03:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:23.912 13:03:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:23.912 13:03:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:23.912 13:03:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:23.912 13:03:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:23.912 13:03:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:03:23.912 13:03:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:03:23.912 13:03:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:23.912 13:03:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:23.912 13:03:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:23.912 13:03:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:23.912 13:03:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:03:23.912 13:03:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:23.912 13:03:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:23.912 13:03:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:23.912 13:03:04 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.912 13:03:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.912 13:03:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.912 13:03:04 -- paths/export.sh@5 -- # export PATH 00:03:23.912 13:03:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:23.912 13:03:04 -- nvmf/common.sh@47 -- # : 0 00:03:23.912 13:03:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:23.912 13:03:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:23.912 13:03:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:23.912 13:03:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:23.912 13:03:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:23.912 13:03:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:23.912 13:03:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:23.912 13:03:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:23.912 13:03:04 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:23.912 13:03:04 -- spdk/autotest.sh@32 -- # 
uname -s 00:03:23.912 13:03:04 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:23.912 13:03:04 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:23.912 13:03:04 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:03:23.912 13:03:04 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:23.912 13:03:04 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:03:23.912 13:03:04 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:23.912 13:03:04 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:23.912 13:03:04 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:23.912 13:03:04 -- spdk/autotest.sh@48 -- # udevadm_pid=538115 00:03:23.912 13:03:04 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:23.912 13:03:04 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:23.912 13:03:04 -- pm/common@17 -- # local monitor 00:03:23.912 13:03:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.912 13:03:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.912 13:03:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.912 13:03:04 -- pm/common@21 -- # date +%s 00:03:23.912 13:03:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.912 13:03:04 -- pm/common@21 -- # date +%s 00:03:23.912 13:03:04 -- pm/common@25 -- # sleep 1 00:03:23.912 13:03:04 -- pm/common@21 -- # date +%s 00:03:23.912 13:03:04 -- pm/common@21 -- # date +%s 00:03:23.912 13:03:04 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721991784 00:03:23.912 13:03:04 -- pm/common@21 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721991784 00:03:23.912 13:03:04 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721991784 00:03:23.912 13:03:04 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721991784 00:03:24.171 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721991784_collect-cpu-load.pm.log 00:03:24.171 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721991784_collect-vmstat.pm.log 00:03:24.171 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721991784_collect-cpu-temp.pm.log 00:03:24.171 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721991784_collect-bmc-pm.bmc.pm.log 00:03:25.107 13:03:05 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:25.107 13:03:05 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:25.107 13:03:05 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:25.107 13:03:05 -- common/autotest_common.sh@10 -- # set +x 00:03:25.107 13:03:05 -- spdk/autotest.sh@59 -- # create_test_list 00:03:25.107 13:03:05 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:25.107 13:03:05 -- common/autotest_common.sh@10 -- # set +x 00:03:25.107 13:03:05 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/autotest.sh 00:03:25.107 13:03:05 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk 
00:03:25.107 13:03:05 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:03:25.107 13:03:05 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:03:25.107 13:03:05 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/crypto-phy-autotest/spdk 00:03:25.107 13:03:05 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:25.107 13:03:05 -- common/autotest_common.sh@1455 -- # uname 00:03:25.107 13:03:05 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:25.107 13:03:05 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:25.107 13:03:05 -- common/autotest_common.sh@1475 -- # uname 00:03:25.107 13:03:05 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:25.107 13:03:05 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:25.107 13:03:05 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:25.107 13:03:05 -- spdk/autotest.sh@72 -- # hash lcov 00:03:25.107 13:03:05 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:25.107 13:03:05 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:25.107 --rc lcov_branch_coverage=1 00:03:25.107 --rc lcov_function_coverage=1 00:03:25.107 --rc genhtml_branch_coverage=1 00:03:25.107 --rc genhtml_function_coverage=1 00:03:25.107 --rc genhtml_legend=1 00:03:25.107 --rc geninfo_all_blocks=1 00:03:25.107 ' 00:03:25.107 13:03:05 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:25.107 --rc lcov_branch_coverage=1 00:03:25.107 --rc lcov_function_coverage=1 00:03:25.107 --rc genhtml_branch_coverage=1 00:03:25.107 --rc genhtml_function_coverage=1 00:03:25.107 --rc genhtml_legend=1 00:03:25.107 --rc geninfo_all_blocks=1 00:03:25.107 ' 00:03:25.107 13:03:05 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:25.107 --rc lcov_branch_coverage=1 00:03:25.107 --rc lcov_function_coverage=1 00:03:25.107 --rc genhtml_branch_coverage=1 00:03:25.107 --rc genhtml_function_coverage=1 00:03:25.107 --rc genhtml_legend=1 
00:03:25.107 --rc geninfo_all_blocks=1 00:03:25.107 --no-external' 00:03:25.107 13:03:05 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:25.107 --rc lcov_branch_coverage=1 00:03:25.107 --rc lcov_function_coverage=1 00:03:25.107 --rc genhtml_branch_coverage=1 00:03:25.107 --rc genhtml_function_coverage=1 00:03:25.107 --rc genhtml_legend=1 00:03:25.107 --rc geninfo_all_blocks=1 00:03:25.107 --no-external' 00:03:25.107 13:03:05 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:25.107 lcov: LCOV version 1.14 00:03:25.107 13:03:05 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/crypto-phy-autotest/spdk -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info 00:03:26.482 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:26.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:26.482 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:26.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:26.482 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:26.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:26.482 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:26.482 geninfo: WARNING: GCOV did not produce any 
data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:26.482 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:26.482 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:26.741 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:26.741 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:26.741 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:26.741 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:26.741 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:26.741 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:26.741 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:26.741 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:26.741 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:26.741 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:26.741 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:26.741 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:26.741 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:26.741 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:26.741 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:26.741 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:26.741 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:26.741 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:26.741 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:26.741 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:26.741 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:26.741 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:26.741 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:26.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:26.742 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:26.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:26.742 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:26.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:26.742 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:26.742 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:26.742 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:26.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:26.742 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:26.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:26.742 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:26.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:26.742 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:26.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:26.742 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:26.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:26.742 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:26.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:26.742 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:26.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:26.742 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:26.742 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:26.742 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:26.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:26.742 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:26.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:26.742 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:26.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:26.742 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:26.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:26.742 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:26.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:26.742 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:26.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:26.742 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:26.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:27.001 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 
00:03:27.001 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:27.001 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:27.001 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:27.001 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:27.001 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:27.001 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:27.001 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:27.001 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:27.001 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:27.001 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:27.001 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:27.001 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:27.001 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:27.001 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:27.001 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:27.001 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:27.001 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:27.001 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:27.001 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:27.002 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:27.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:27.002 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:27.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:27.002 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:27.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:27.002 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:27.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:27.002 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:27.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:27.002 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:03:27.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/net.gcno 
00:03:27.002 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:27.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:27.002 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:27.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:27.002 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:27.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:27.002 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:27.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:27.002 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:27.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:27.002 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:27.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:27.002 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:27.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:27.002 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:27.002 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:27.002 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:27.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:27.002 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:27.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:27.002 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:27.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:27.002 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:27.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:27.002 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:27.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:27.002 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:27.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:27.002 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:27.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:27.002 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:27.002 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:27.002 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:27.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:27.002 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:27.002 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:27.261 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:27.261 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:27.261 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:27.261 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:27.261 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:27.261 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:27.261 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:27.261 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:27.261 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:27.261 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:27.261 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:27.261 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:27.261 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:27.261 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:27.261 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:27.261 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:27.261 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:27.261 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:27.261 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:27.261 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:27.261 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:27.261 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:27.261 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:27.261 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:27.261 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:27.261 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:27.261 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:27.261 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:27.261 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:27.261 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:27.261 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:27.261 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:27.261 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:27.261 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:27.261 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:27.261 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:27.261 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:27.261 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:27.261 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:27.261 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:42.139 /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:42.139 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:57.046 13:03:35 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:57.046 13:03:35 -- 
common/autotest_common.sh@724 -- # xtrace_disable
00:03:57.046 13:03:35 -- common/autotest_common.sh@10 -- # set +x
00:03:57.046 13:03:35 -- spdk/autotest.sh@91 -- # rm -f
00:03:57.046 13:03:35 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset
00:03:58.947 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:03:58.947 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:03:58.947 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:03:58.947 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:03:58.947 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:03:58.947 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:03:58.947 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:03:58.947 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:03:58.947 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:03:59.205 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:03:59.205 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:03:59.205 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:03:59.205 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:03:59.205 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:03:59.205 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:03:59.205 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:03:59.205 0000:d8:00.0 (8086 0a54): Already using the nvme driver
00:03:59.205 13:03:39 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:03:59.205 13:03:39 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:03:59.205 13:03:39 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:03:59.205 13:03:39 -- common/autotest_common.sh@1670 -- # local nvme bdf
00:03:59.205 13:03:39 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:03:59.205 13:03:39 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:03:59.205 13:03:39 -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:03:59.205 13:03:39 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:59.205 13:03:39 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:59.205 13:03:39 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:03:59.205 13:03:39 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:03:59.205 13:03:39 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:03:59.205 13:03:39 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:03:59.205 13:03:39 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:03:59.205 13:03:39 -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:59.464 No valid GPT data, bailing
00:03:59.464 13:03:39 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:59.464 13:03:39 -- scripts/common.sh@391 -- # pt=
00:03:59.464 13:03:39 -- scripts/common.sh@392 -- # return 1
00:03:59.464 13:03:39 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:59.464 1+0 records in
00:03:59.464 1+0 records out
00:03:59.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00608562 s, 172 MB/s
00:03:59.464 13:03:39 -- spdk/autotest.sh@118 -- # sync
00:03:59.464 13:03:39 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:59.464 13:03:39 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:59.464 13:03:39 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:04:07.583 13:03:46 -- spdk/autotest.sh@124 -- # uname -s
00:04:07.583 13:03:46 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:04:07.583 13:03:46 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh
00:04:07.583 13:03:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:07.583 13:03:46 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:07.583 13:03:46 -- common/autotest_common.sh@10 -- # set +x
00:04:07.583 ************************************
00:04:07.583 START TEST setup.sh
00:04:07.583 ************************************
00:04:07.583 13:03:46 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh
00:04:07.583 * Looking for test storage...
00:04:07.583 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup
00:04:07.583 13:03:46 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:04:07.583 13:03:46 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:04:07.583 13:03:46 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh
00:04:07.583 13:03:46 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:07.583 13:03:46 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:07.583 13:03:46 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:07.583 ************************************
00:04:07.583 START TEST acl
00:04:07.583 ************************************
00:04:07.583 13:03:46 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh
00:04:07.583 * Looking for test storage...
00:04:07.583 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup
00:04:07.583 13:03:47 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:04:07.583 13:03:47 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:04:07.583 13:03:47 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:04:07.583 13:03:47 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf
00:04:07.583 13:03:47 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:04:07.583 13:03:47 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:04:07.584 13:03:47 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:04:07.584 13:03:47 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:07.584 13:03:47 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:04:07.584 13:03:47 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:04:07.584 13:03:47 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:04:07.584 13:03:47 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:04:07.584 13:03:47 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:04:07.584 13:03:47 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:04:07.584 13:03:47 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:07.584 13:03:47 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset
00:04:11.776 13:03:51 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:04:11.776 13:03:51 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:04:11.776 13:03:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:11.776 13:03:51 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:04:11.777 13:03:51 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:04:11.777 13:03:51 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status
00:04:15.968 Hugepages
00:04:15.968 node hugesize free / total
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:15.968
00:04:15.968 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]]
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]]
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]]
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]]
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]]
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]]
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]]
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]]
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]]
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]]
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]]
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]]
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:15.968 13:03:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]]
00:04:15.969 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:15.969 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:15.969 13:03:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:15.969 13:03:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]]
00:04:15.969 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:15.969 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:15.969 13:03:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:15.969 13:03:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]]
00:04:15.969 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:15.969 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:15.969 13:03:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:15.969 13:03:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]]
00:04:15.969 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:04:15.969 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:04:15.969 13:03:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:15.969 13:03:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]]
00:04:15.969 13:03:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:04:15.969 13:03:55 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]]
00:04:15.969 13:03:55 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:04:15.969 13:03:55 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:04:15.969 13:03:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:15.969 13:03:55 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:04:15.969 13:03:55 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:04:15.969 13:03:55 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:15.969 13:03:55 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:15.969 13:03:55 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:04:15.969 ************************************
00:04:15.969 START TEST denied
00:04:15.969 ************************************
00:04:15.969 13:03:56 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied
00:04:15.969 13:03:56 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0'
00:04:15.969 13:03:56 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:04:15.969 13:03:56 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0'
00:04:15.969 13:03:56 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:04:15.969 13:03:56 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config
00:04:20.160 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0
00:04:20.160 13:04:00 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0
00:04:20.160 13:04:00 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:04:20.160 13:04:00 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:04:20.160 13:04:00 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]]
00:04:20.160 13:04:00 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver
00:04:20.160 13:04:00 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:20.160 13:04:00 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:20.160 13:04:00 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:04:20.160 13:04:00 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:20.160 13:04:00 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset
00:04:25.492
00:04:25.492 real 0m9.803s
00:04:25.492 user 0m3.098s
00:04:25.492 sys 0m6.014s
00:04:25.492 13:04:05 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:25.492 13:04:05 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:04:25.492 ************************************
00:04:25.492 END TEST denied
00:04:25.492 ************************************
00:04:25.492 13:04:05 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:04:25.492 13:04:05 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:25.492 13:04:05 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:25.492 13:04:05 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:04:25.492 ************************************
00:04:25.492 START TEST allowed
00:04:25.492 ************************************
00:04:25.492 13:04:05 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed
00:04:25.492 13:04:05 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0
00:04:25.492 13:04:05 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:04:25.492 13:04:05 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*'
00:04:25.492 13:04:05 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:04:25.492 13:04:05 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config
00:04:32.060 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:04:32.060 13:04:11 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:04:32.060 13:04:11 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:04:32.060 13:04:11 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:04:32.060 13:04:11 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:32.060 13:04:11 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset
00:04:36.251
00:04:36.251 real 0m10.783s
00:04:36.251 user 0m2.995s
00:04:36.251 sys 0m6.012s
00:04:36.251 13:04:16 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:36.251 13:04:16 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:04:36.251 ************************************
00:04:36.251 END TEST allowed
00:04:36.251 ************************************
00:04:36.251
00:04:36.251 real 0m29.785s
00:04:36.251 user 0m9.299s
00:04:36.251 sys 0m18.336s
00:04:36.251 13:04:16 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:36.251 13:04:16 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:04:36.251 ************************************
00:04:36.251 END TEST acl
00:04:36.251 ************************************
00:04:36.511 13:04:16 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh
00:04:36.511 13:04:16 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:36.511 13:04:16 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:36.511 13:04:16 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:36.511 ************************************
00:04:36.511 START TEST hugepages
00:04:36.511 ************************************
00:04:36.511 13:04:16 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh
00:04:36.511 * Looking for test storage...
00:04:36.511 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=()
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@18 -- # local node=
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@19 -- # local var val
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41355564 kB' 'MemAvailable: 45349088 kB' 'Buffers: 6064 kB' 'Cached: 10561936 kB' 'SwapCached: 0 kB' 'Active: 7394204 kB' 'Inactive: 3689560 kB' 'Active(anon): 6995780 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519060 kB' 'Mapped: 181236 kB' 'Shmem: 6480016 kB' 'KReclaimable: 553248 kB' 'Slab: 1208968 kB' 'SReclaimable: 553248 kB' 'SUnreclaim: 655720 kB' 'KernelStack: 22272 kB' 'PageTables: 9212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439060 kB' 'Committed_AS: 8475528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219344 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB'
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.511 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:36.512 13:04:16 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:36.512 13:04:16 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:36.512 13:04:16 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:36.513 13:04:16 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:36.513 13:04:16 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:36.513 13:04:16 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:36.513 13:04:16 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:36.513 13:04:16 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:36.513 13:04:16 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:36.513 13:04:16 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:36.513 13:04:16 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:36.513 13:04:16 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:36.513 13:04:16 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:36.513 13:04:16 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:36.513 13:04:16 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:36.513 13:04:16 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:36.513 13:04:16 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:36.513 13:04:16 setup.sh.hugepages -- setup/hugepages.sh@37 -- # 
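The xtrace above is setup/common.sh scanning /proc/meminfo with an `IFS=': ' read` loop until the requested key (here Hugepagesize) matches, then echoing its value. A minimal standalone sketch of that parsing pattern follows; the function name `get_meminfo_value` and the optional file argument are illustrative, not part of the SPDK scripts.

```shell
# Sketch of the setup/common.sh meminfo-scan pattern seen in the xtrace:
# split each line on IFS=': ', let the trailing "_" swallow the "kB"
# suffix, and return the first matching value.
get_meminfo_value() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    return 1
}

get_meminfo_value Hugepagesize   # typically 2048 on x86_64, as in this log
```

Scanning line by line like this avoids forking grep/awk for every lookup, which matters in a test harness that queries meminfo dozens of times.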
local node hp 00:04:36.513 13:04:16 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:36.513 13:04:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:36.513 13:04:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:36.513 13:04:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:36.513 13:04:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:36.513 13:04:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:36.513 13:04:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:36.513 13:04:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:36.513 13:04:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:36.513 13:04:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:36.513 13:04:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:36.513 13:04:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:36.513 13:04:17 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:36.513 13:04:17 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.513 13:04:17 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.513 13:04:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:36.772 ************************************ 00:04:36.772 START TEST default_setup 00:04:36.772 ************************************ 00:04:36.772 13:04:17 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:04:36.772 13:04:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # 
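The clear_hp trace above walks every per-node hugepage pool under sysfs and writes 0 into each nr_hugepages file. A sketch of that loop is below; the `root` parameter defaults to the real sysfs location and is parameterized here only so the sketch can be exercised without root privileges (writing the real files requires root).

```shell
# Sketch of the clear_hp loop in the xtrace above: zero every per-node
# hugepage pool (e.g. node0/hugepages/hugepages-2048kB/nr_hugepages).
clear_hp() {
    local root=${1:-/sys/devices/system/node} node hp
    for node in "$root"/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*/nr_hugepages; do
            [[ -e $hp ]] && echo 0 > "$hp"
        done
    done
}
```

On the two-node machine in this log the inner loop fires twice per node, once per supported hugepage size (2048 kB and 1 GB), matching the four `echo 0` lines traced above.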
get_test_nr_hugepages 2097152 0 00:04:36.772 13:04:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:36.772 13:04:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:36.772 13:04:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:36.772 13:04:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:36.772 13:04:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:36.772 13:04:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:36.772 13:04:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:36.772 13:04:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:36.772 13:04:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:36.772 13:04:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:36.772 13:04:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:36.772 13:04:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:36.772 13:04:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:36.772 13:04:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:36.772 13:04:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:36.772 13:04:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:36.772 13:04:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:36.772 13:04:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:36.772 13:04:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup 
output 00:04:36.772 13:04:17 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.772 13:04:17 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:40.969 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:40.969 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:40.969 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:40.969 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:40.969 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:40.969 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:40.969 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:40.969 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:40.969 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:40.969 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:40.969 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:40.969 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:40.969 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:40.969 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:40.969 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:40.969 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:42.878 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:42.878 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:42.878 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:42.878 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:42.878 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:42.878 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:42.878 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:42.878 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:42.878 13:04:23 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:42.878 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:42.878 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:42.878 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:42.878 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:42.878 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:42.878 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.878 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.878 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.878 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.878 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.878 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.878 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.878 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43541420 kB' 'MemAvailable: 47534304 kB' 'Buffers: 6064 kB' 'Cached: 10562076 kB' 'SwapCached: 0 kB' 'Active: 7410560 kB' 'Inactive: 3689560 kB' 'Active(anon): 7012136 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535500 kB' 'Mapped: 181412 kB' 'Shmem: 6480156 kB' 'KReclaimable: 552608 kB' 'Slab: 1206592 kB' 'SReclaimable: 
552608 kB' 'SUnreclaim: 653984 kB' 'KernelStack: 22272 kB' 'PageTables: 9140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8490184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219232 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB' 00:04:42.878 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [condensed: the meminfo scan tests every key from MemTotal through HardwareCorrupted against AnonHugePages; each fails the match and the loop continues] 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup --
setup/common.sh@19 -- # local var val 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43541916 kB' 'MemAvailable: 47534800 kB' 'Buffers: 6064 kB' 'Cached: 10562076 kB' 'SwapCached: 0 kB' 'Active: 7410052 kB' 'Inactive: 3689560 kB' 'Active(anon): 7011628 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534888 kB' 'Mapped: 181344 kB' 'Shmem: 6480156 kB' 'KReclaimable: 552608 kB' 'Slab: 1206624 kB' 'SReclaimable: 552608 kB' 'SUnreclaim: 654016 kB' 'KernelStack: 22240 kB' 'PageTables: 8824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8488600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219200 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB' 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.879 13:04:23 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.879 13:04:23 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.879 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.880 
13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.880 13:04:23 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- 
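[Editor's note] The scan traced above is the generic get_meminfo pattern: split each /proc/meminfo line on IFS=': ', skip non-matching fields with continue, and echo the value once the requested field matches. A minimal standalone sketch of that loop (reconstructed from this trace, not copied from the real setup/common.sh) is:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo read loop seen in the trace above.
# Reconstructed from the log output; the real setup/common.sh may differ.
get_meminfo() {
    local get=$1 var val _
    # Split "Field: value kB" on ':' and spaces; the unit lands in $_.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # not the requested field, keep scanning
        echo "$val"
        return 0
    done
    return 1   # field not present in the input
}

# Usage against a small inline sample instead of the real /proc/meminfo:
sample=$'MemTotal: 60295220 kB\nHugePages_Surp: 0\nHugepagesize: 2048 kB'
get_meminfo HugePages_Surp <<< "$sample"   # prints 0, matching surp=0 in the trace
```

With the real /proc/meminfo piped in, the same call yields the HugePages_Surp value that hugepages.sh assigns to surp.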
setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:42.880 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43541700 kB' 'MemAvailable: 47534584 kB' 'Buffers: 6064 kB' 'Cached: 10562096 kB' 'SwapCached: 0 kB' 'Active: 7410200 kB' 'Inactive: 3689560 kB' 'Active(anon): 7011776 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535008 kB' 'Mapped: 181344 kB' 'Shmem: 6480176 kB' 'KReclaimable: 552608 kB' 'Slab: 1206624 kB' 'SReclaimable: 552608 kB' 'SUnreclaim: 654016 kB' 'KernelStack: 22256 kB' 'PageTables: 8820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8490224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219232 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB'
[... the "[[ <field> == HugePages_Rsvd ]] / continue" pair repeats for each snapshot field from MemTotal through SUnreclaim ...]
00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:42.881 nr_hugepages=1024 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:42.881 resv_hugepages=0 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:42.881 
surplus_hugepages=0 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:42.881 anon_hugepages=0 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43540900 kB' 'MemAvailable: 47533784 kB' 'Buffers: 6064 kB' 'Cached: 10562116 kB' 'SwapCached: 0 kB' 'Active: 7410372 kB' 'Inactive: 3689560 kB' 'Active(anon): 7011948 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 
'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535196 kB' 'Mapped: 181344 kB' 'Shmem: 6480196 kB' 'KReclaimable: 552608 kB' 'Slab: 1206624 kB' 'SReclaimable: 552608 kB' 'SUnreclaim: 654016 kB' 'KernelStack: 22176 kB' 'PageTables: 8904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8490244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219216 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.881 13:04:23 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.881 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.882 13:04:23 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.882 13:04:23 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.882 13:04:23 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# read -r var val _ 00:04:42.882 [... xtrace elided: setup/common.sh@31-32 compares each remaining /proc/meminfo field (VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted) against HugePages_Total and issues `continue` for every non-match ...] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:42.882 [... xtrace elided: setup/hugepages.sh@27-33 iterates /sys/devices/system/node/node+([0-9]), recording nodes_sys[0]=1024, nodes_sys[1]=0, no_nodes=2 ...] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:42.882 [... xtrace elided: setup/common.sh@17-31 sets get=HugePages_Surp, node=0, mem_f=/sys/devices/system/node/node0/meminfo, mapfile -t mem, strips the 'Node 0 ' prefix, IFS=': ', read -r var val _ ...] 00:04:42.882 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB'
'MemFree: 25374144 kB' 'MemUsed: 7264996 kB' 'SwapCached: 0 kB' 'Active: 2857576 kB' 'Inactive: 231284 kB' 'Active(anon): 2724528 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2684740 kB' 'Mapped: 80328 kB' 'AnonPages: 407420 kB' 'Shmem: 2320408 kB' 'KernelStack: 11576 kB' 'PageTables: 5408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 220920 kB' 'Slab: 534968 kB' 'SReclaimable: 220920 kB' 'SUnreclaim: 314048 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:42.882 [... xtrace elided: setup/common.sh@31-32 compares each node0 meminfo field from MemTotal through HugePages_Free against HugePages_Surp and issues `continue` for every non-match ...] 00:04:43.143 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.143 13:04:23 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:43.143 13:04:23
setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:43.143 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:43.143 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:43.143 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:43.143 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:43.143 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:43.143 node0=1024 expecting 1024 00:04:43.143 13:04:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:43.143 00:04:43.143 real 0m6.364s 00:04:43.143 user 0m1.596s 00:04:43.144 sys 0m2.895s 00:04:43.144 13:04:23 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.144 13:04:23 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:43.144 ************************************ 00:04:43.144 END TEST default_setup 00:04:43.144 ************************************ 00:04:43.144 13:04:23 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:43.144 13:04:23 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.144 13:04:23 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.144 13:04:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:43.144 ************************************ 00:04:43.144 START TEST per_node_1G_alloc 00:04:43.144 ************************************ 00:04:43.144 13:04:23 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:04:43.144 13:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:43.144 
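The xtrace above is driven by a small field parser in setup/common.sh: read a meminfo file with IFS=': ', skip every field until the requested key matches, then print its value. A minimal stand-alone sketch of that pattern follows; the function name `get_meminfo_value` and the fabricated sample file are illustrative for this sketch, not the exact SPDK helper:

```shell
#!/usr/bin/env bash
# Sketch of the meminfo lookup pattern seen in setup/common.sh:
# split each line on ': ', compare the field name against the
# requested key, and echo the value on the first match.
get_meminfo_value() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # non-match: keep scanning
        echo "$val"                        # match: print value, drop the unit
        return 0
    done < "$mem_f"
    return 1                               # key not present
}

# Usage against a fabricated meminfo snippet (values are illustrative):
sample=$(mktemp)
printf '%s\n' 'MemTotal: 32639140 kB' 'HugePages_Total: 1024' \
    'HugePages_Surp: 0' > "$sample"
get_meminfo_value HugePages_Total "$sample"   # prints 1024
rm -f "$sample"
```

Because IFS contains both ':' and ' ', a line like `Hugepagesize: 2048 kB` splits into the key, the numeric value, and the trailing unit, which the `_` placeholder discards.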
13:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:43.144 13:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:43.144 [... xtrace elided: setup/hugepages.sh@50-73 shifts node_ids=('0' '1'), derives nr_hugepages=512 from size 1048576 kB, and records nodes_test[0]=512 nodes_test[1]=512 for _no_nodes=2 ...] 00:04:43.144 13:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:43.144 13:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:43.144 13:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:43.144 13:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.144 13:04:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:47.332 [... elided: PCI functions 0000:00:04.0-7 and 0000:80:04.2-7 (8086 2021) each report 'Already using the vfio-pci driver' ...] 00:04:47.332 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:47.332
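The get_test_nr_hugepages trace above turns a requested allocation size into a per-node page count: 1048576 kB divided by the 2048 kB default hugepage size yields 512 pages, assigned to each node listed after the size. A hedged stand-alone sketch of that arithmetic follows; the function name and the fixed 2048 kB page size are assumptions for this sketch, not the exact SPDK helper:

```shell
#!/usr/bin/env bash
# Sketch of the per-node hugepage arithmetic seen in setup/hugepages.sh:
# size in kB / default hugepage size in kB = page count, assigned to
# every requested NUMA node id.
compute_nodes_test() {
    local size_kb=$1; shift
    local default_hugepages_kb=2048              # 2 MiB pages, the x86_64 default
    local nr_hugepages=$(( size_kb / default_hugepages_kb ))
    local -a out=()
    local node
    for node in "$@"; do                         # one count per requested node id
        out[node]=$nr_hugepages
    done
    echo "${out[*]}"
}

compute_nodes_test 1048576 0 1   # prints: 512 512
```

The result matches the trace: NRHUGE=512 with HUGENODE=0,1 hands scripts/setup.sh 512 pages for node 0 and 512 for node 1.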
0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:47.332 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:47.332 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:47.332 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:47.332 [... xtrace elided: setup/hugepages.sh@89-94 declares locals node, sorted_t, sorted_s, surp, resv, anon ...] 00:04:47.332 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:47.332 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:47.332 [... xtrace elided: setup/common.sh@17-31 sets get=AnonHugePages with no node argument, falls back to mem_f=/proc/meminfo, mapfile -t mem, IFS=': ', read -r var val _ ...] 00:04:47.333 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43543816 kB' 'MemAvailable: 47536700 kB' 'Buffers: 6064 kB' 'Cached: 10562232 kB' 'SwapCached: 0 kB' 'Active: 7409664 kB' 'Inactive: 3689560 kB' 'Active(anon): 7011240 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534156 kB' 'Mapped: 180168 kB' 'Shmem: 6480312 kB' 'KReclaimable: 552608 kB' 'Slab: 1206340 kB' 'SReclaimable: 552608 kB' 'SUnreclaim: 653732 kB' 'KernelStack: 22096 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8482720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219312 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB' 00:04:47.333 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.333 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:47.333 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.333 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.333 [... xtrace elided: setup/common.sh@31-32 compares each meminfo field from MemFree through Dirty against AnonHugePages and issues `continue` for every non-match ...] 00:04:47.333 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.333 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.333 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.333 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.333 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.333 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.333 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.333 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.333 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.333 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.333 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.333 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.333 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.333 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.334 13:04:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.334 13:04:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43544548 kB' 'MemAvailable: 47537432 kB' 'Buffers: 6064 kB' 'Cached: 10562236 kB' 'SwapCached: 0 kB' 'Active: 7410060 kB' 'Inactive: 3689560 kB' 'Active(anon): 7011636 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534632 kB' 'Mapped: 180196 kB' 'Shmem: 6480316 kB' 'KReclaimable: 552608 kB' 'Slab: 1206300 kB' 'SReclaimable: 552608 kB' 'SUnreclaim: 653692 kB' 'KernelStack: 22208 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8484468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219296 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB' 
00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.334 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.335 13:04:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.335 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # per-field scan: IFS=': '; read -r var val _; [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] || continue [identical four-line iteration repeated for each remaining /proc/meminfo field, CommitLimit through HugePages_Rsvd; elided]
00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.336 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.337 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43545584 kB' 'MemAvailable: 47538468 kB' 'Buffers: 6064 kB' 'Cached: 10562236 kB' 'SwapCached: 0 kB' 'Active: 7409456 kB' 'Inactive: 3689560 kB' 'Active(anon): 7011032 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533992 kB' 'Mapped: 180196 kB' 'Shmem: 6480316 kB' 'KReclaimable: 552608 kB' 'Slab: 1206276 kB' 'SReclaimable: 552608 kB' 'SUnreclaim: 653668 kB' 'KernelStack: 22064 kB' 'PageTables: 8516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8484492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219264 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB'
00:04:47.337 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # per-field scan for HugePages_Rsvd [identical iteration repeated for each field, MemTotal through HugePages_Free; elided]
00:04:47.339 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:47.339 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:47.339 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:47.339 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:47.339 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:47.339 nr_hugepages=1024
00:04:47.339 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:47.339 resv_hugepages=0
00:04:47.339 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:47.339 surplus_hugepages=0
00:04:47.339 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:47.339 anon_hugepages=0
00:04:47.339 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:47.339 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:47.339 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:47.339 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-29 -- # local get=HugePages_Total; local node=; local var val; local mem_f mem; mem_f=/proc/meminfo; [[ -e /sys/devices/system/node/node/meminfo ]]; [[ -n '' ]]; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }")
00:04:47.339 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43548380 kB' 'MemAvailable: 47541264 kB' 'Buffers: 6064 kB' 'Cached: 10562280 kB' 'SwapCached: 0 kB' 'Active: 7409560 kB' 'Inactive: 3689560 kB' 'Active(anon): 7011136 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534016 kB' 'Mapped: 180196 kB' 'Shmem: 6480360 kB' 'KReclaimable: 552608 kB' 'Slab: 1206276 kB' 'SReclaimable: 552608 kB' 'SUnreclaim: 653668 kB' 'KernelStack: 22176 kB' 'PageTables: 8272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8484516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219344 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB'
00:04:47.340 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # per-field scan for HugePages_Total begins [identical iteration repeated per field; trace continues mid-scan]
13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.340 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.340 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.340 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.340 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.340 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.340 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.600 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.600 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.600 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.600 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.600 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.600 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.600 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.600 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.600 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.600 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.600 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.600 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.600 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.600 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.600 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.601 13:04:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.601 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:47.602 13:04:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.602 13:04:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26435132 kB' 'MemUsed: 6204008 kB' 'SwapCached: 0 kB' 'Active: 2859160 kB' 'Inactive: 231284 kB' 'Active(anon): 2726112 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2684900 kB' 'Mapped: 80056 kB' 'AnonPages: 408780 kB' 'Shmem: 2320568 kB' 'KernelStack: 11528 kB' 'PageTables: 5288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 220920 kB' 'Slab: 534628 kB' 'SReclaimable: 220920 kB' 'SUnreclaim: 313708 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.602 13:04:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.602 13:04:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': '
00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.602 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[xtrace repeats the IFS/read/continue cycle for each remaining node-0 meminfo field: Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free]
00:04:47.603 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.603 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:47.603 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:47.603 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:47.603 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:47.603 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:47.603 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:47.603 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:47.603 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:04:47.603 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:47.603 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
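The xtrace above is the body of common.sh's get_meminfo helper: pick the per-node meminfo file when a node argument is given, strip the "Node N " prefix those files carry, then split each "key: value" line on `IFS=': '` and skip fields until the requested key matches. A minimal sketch, reconstructed from the trace (the regex-based prefix strip stands in for the extglob parameter expansion shown in the trace; this is an illustration, not the SPDK source):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo helper whose xtrace appears above. It selects
# the per-node meminfo file when a node is given, drops the "Node N "
# prefix each line carries there, and scans "key: value" pairs until the
# requested key matches, echoing the value.
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    local line
    for line in "${mem[@]}"; do
        # Per-node files prefix every line with "Node N "; drop it.
        [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}
```

With this, `get_meminfo HugePages_Surp 1` prints the surplus-hugepage count for NUMA node 1, which is the value hugepages.sh@117 folds into nodes_test above.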
00:04:47.603 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.603 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:47.603 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:47.603 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.603 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.603 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.603 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.603 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 17108332 kB' 'MemUsed: 10547748 kB' 'SwapCached: 0 kB' 'Active: 4556272 kB' 'Inactive: 3458276 kB' 'Active(anon): 4290896 kB' 'Inactive(anon): 0 kB' 'Active(file): 265376 kB' 'Inactive(file): 3458276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7883464 kB' 'Mapped: 100660 kB' 'AnonPages: 131096 kB' 'Shmem: 4159812 kB' 'KernelStack: 10696 kB' 'PageTables: 3200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 331688 kB' 'Slab: 671640 kB' 'SReclaimable: 331688 kB' 'SUnreclaim: 339952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:47.603 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.603 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.603 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': '
00:04:47.603 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:47.604 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.604 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[xtrace repeats the IFS/read/continue cycle for each remaining node-1 meminfo field: MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free]
00:04:47.605 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.605 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:47.605 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:47.605 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:47.605 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:47.605 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:47.605 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- #
sorted_s[nodes_sys[node]]=1 00:04:47.605 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:47.605 node0=512 expecting 512 00:04:47.605 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:47.605 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:47.605 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:47.605 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:47.605 node1=512 expecting 512 00:04:47.605 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:47.605 00:04:47.605 real 0m4.430s 00:04:47.605 user 0m1.670s 00:04:47.605 sys 0m2.841s 00:04:47.605 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.605 13:04:27 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:47.605 ************************************ 00:04:47.605 END TEST per_node_1G_alloc 00:04:47.605 ************************************ 00:04:47.605 13:04:27 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:47.605 13:04:27 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.605 13:04:27 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.605 13:04:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:47.605 ************************************ 00:04:47.605 START TEST even_2G_alloc 00:04:47.605 ************************************ 00:04:47.605 13:04:28 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:04:47.605 13:04:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # 
get_test_nr_hugepages 2097152 00:04:47.605 13:04:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:47.605 13:04:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:47.605 13:04:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:47.605 13:04:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:47.605 13:04:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:47.605 13:04:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:47.605 13:04:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:47.605 13:04:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:47.605 13:04:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:47.605 13:04:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:47.605 13:04:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:47.605 13:04:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:47.605 13:04:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:47.605 13:04:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:47.605 13:04:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:47.605 13:04:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:47.605 13:04:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:47.605 13:04:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:47.605 13:04:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 
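The trace above shows get_test_nr_hugepages converting the 2097152 kB (2 GiB) request into 1024 pages of the 2048 kB default size, and get_test_nr_hugepages_per_node assigning 512 to each of the two nodes, filling the array from the highest node index down (hugepages.sh@81-84). A hedged sketch of that arithmetic; split_hugepages_evenly is an illustrative name, not an SPDK helper:

```shell
#!/usr/bin/env bash
# Sketch of the even per-node split performed in the trace above: a size
# in kB becomes nr_hugepages pages of the 2048 kB default, then the pages
# are divided across no_nodes NUMA nodes, assigning from the highest node
# index downward as the (( _no_nodes > 0 )) loop in the trace does.
split_hugepages_evenly() {
    local size_kb=$1 no_nodes=$2
    local default_hugepages=2048                      # kB per 2M hugepage
    local nr_hugepages=$((size_kb / default_hugepages))
    local per_node=$((nr_hugepages / no_nodes))
    local -a nodes_test=()
    local _no_nodes=$no_nodes
    while ((_no_nodes > 0)); do
        nodes_test[_no_nodes - 1]=$per_node
        _no_nodes=$((_no_nodes - 1))
    done
    echo "${nodes_test[@]}"
}
```

For the values in this run, `split_hugepages_evenly 2097152 2` yields `512 512`, matching the "node0=512 expecting 512" / "node1=512 expecting 512" checks printed above.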
00:04:47.605 13:04:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:47.605 13:04:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:47.605 13:04:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:47.605 13:04:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:47.605 13:04:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:47.606 13:04:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:47.606 13:04:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.606 13:04:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:51.829 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:51.829 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:51.829 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:51.829 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:51.829 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:51.829 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:51.829 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:51.829 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:51.829 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:51.829 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:51.829 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:51.829 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:51.829 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:51.829 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:51.829 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:51.829 0000:80:04.0 (8086 2021): Already using the vfio-pci 
driver 00:04:51.829 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:51.829 13:04:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:51.829 13:04:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:51.829 13:04:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:51.829 13:04:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:51.829 13:04:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:51.829 13:04:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:51.829 13:04:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:51.829 13:04:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:51.829 13:04:31 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:51.829 13:04:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:51.829 13:04:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:51.829 13:04:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.829 13:04:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.829 13:04:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.829 13:04:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.829 13:04:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.829 13:04:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.829 13:04:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.829 13:04:31 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.829 13:04:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.829 13:04:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43536892 kB' 'MemAvailable: 47529776 kB' 'Buffers: 6064 kB' 'Cached: 10562404 kB' 'SwapCached: 0 kB' 'Active: 7409524 kB' 'Inactive: 3689560 kB' 'Active(anon): 7011100 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533432 kB' 'Mapped: 180316 kB' 'Shmem: 6480484 kB' 'KReclaimable: 552608 kB' 'Slab: 1206104 kB' 'SReclaimable: 552608 kB' 'SUnreclaim: 653496 kB' 'KernelStack: 22048 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8482356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219184 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB' 00:04:51.829 13:04:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.829 13:04:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.829 13:04:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.829 13:04:31 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.829 13:04:31 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.829 13:04:31 [per-field parse trace condensed: the IFS=': ' / read -r var val _ loop compares every remaining /proc/meminfo key (MemFree through HardwareCorrupted) against AnonHugePages and issues `continue` on each non-match; timestamps run 00:04:51.829-00:04:51.831, 13:04:31-13:04:32] setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var
val _ 00:04:51.831 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.831 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.831 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.831 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:51.831 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:51.831 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.831 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:51.831 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.831 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.831 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.831 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.831 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.831 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.831 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.831 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.831 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.831 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43537144 kB' 'MemAvailable: 47530028 kB' 'Buffers: 6064 kB' 'Cached: 10562416 kB' 'SwapCached: 0 kB' 'Active: 7409708 kB' 'Inactive: 3689560 kB' 'Active(anon): 7011284 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 
'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533584 kB' 'Mapped: 180288 kB' 'Shmem: 6480496 kB' 'KReclaimable: 552608 kB' 'Slab: 1206104 kB' 'SReclaimable: 552608 kB' 'SUnreclaim: 653496 kB' 'KernelStack: 22080 kB' 'PageTables: 8452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8482740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219152 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB' 00:04:51.831 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.831 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.831 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.831 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.831 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.831 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.831 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.831 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.831 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.831 13:04:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.831 13:04:32 [per-field parse trace condensed: the same IFS=': ' / read -r var val _ loop now compares each /proc/meminfo key (MemAvailable through CmaTotal) against HugePages_Surp and issues `continue` on each non-match; timestamps run 00:04:51.831-00:04:51.833, 13:04:32] setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.833 13:04:32
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.833 13:04:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43537476 kB' 'MemAvailable: 47530360 kB' 'Buffers: 6064 
kB' 'Cached: 10562416 kB' 'SwapCached: 0 kB' 'Active: 7408736 kB' 'Inactive: 3689560 kB' 'Active(anon): 7010312 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533112 kB' 'Mapped: 180212 kB' 'Shmem: 6480496 kB' 'KReclaimable: 552608 kB' 'Slab: 1206108 kB' 'SReclaimable: 552608 kB' 'SUnreclaim: 653500 kB' 'KernelStack: 22096 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8482764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219152 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB' 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.833 13:04:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.833 13:04:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.833 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.834 
13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.834 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.835 13:04:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.835 13:04:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:51.835 nr_hugepages=1024 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:51.835 resv_hugepages=0 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:51.835 surplus_hugepages=0 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:51.835 anon_hugepages=0 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.835 13:04:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43538380 kB' 'MemAvailable: 47531264 kB' 'Buffers: 6064 kB' 'Cached: 10562456 kB' 'SwapCached: 0 kB' 'Active: 7408892 kB' 'Inactive: 3689560 kB' 'Active(anon): 7010468 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533228 kB' 'Mapped: 180212 kB' 'Shmem: 6480536 kB' 'KReclaimable: 552608 kB' 'Slab: 1206108 kB' 'SReclaimable: 552608 kB' 'SUnreclaim: 653500 kB' 'KernelStack: 22064 kB' 'PageTables: 8380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8482784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219152 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB' 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.835 13:04:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.835 13:04:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.835 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.836 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 
00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.837 13:04:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26429912 kB' 'MemUsed: 6209228 kB' 'SwapCached: 0 kB' 'Active: 2857612 kB' 'Inactive: 231284 kB' 'Active(anon): 2724564 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2685040 kB' 'Mapped: 79540 kB' 'AnonPages: 407056 kB' 'Shmem: 2320708 kB' 'KernelStack: 11496 kB' 'PageTables: 5272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 220920 kB' 'Slab: 534152 kB' 'SReclaimable: 220920 kB' 'SUnreclaim: 313232 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.837 13:04:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.837 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.838 13:04:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.838 
13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.838 13:04:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.838 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node1/meminfo 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 17109396 kB' 'MemUsed: 10546684 kB' 'SwapCached: 0 kB' 'Active: 4551264 kB' 'Inactive: 3458276 kB' 'Active(anon): 4285888 kB' 'Inactive(anon): 0 kB' 'Active(file): 265376 kB' 'Inactive(file): 3458276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7883504 kB' 'Mapped: 100672 kB' 'AnonPages: 126176 kB' 'Shmem: 4159852 kB' 'KernelStack: 10568 kB' 'PageTables: 3108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 331688 kB' 'Slab: 671956 kB' 'SReclaimable: 331688 kB' 'SUnreclaim: 340268 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.839 
13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.839 13:04:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.839 13:04:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:51.839 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:51.840 node0=512 expecting 512
00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:51.840 node1=512 expecting 512
00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:51.840
00:04:51.840 real 0m4.165s
00:04:51.840 user 0m1.560s
00:04:51.840 sys 0m2.657s
00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:51.840 13:04:32 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:51.840 ************************************
00:04:51.840 END TEST even_2G_alloc ************************************
00:04:51.840 13:04:32 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:51.840 13:04:32 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:51.840 13:04:32 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:51.840 13:04:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:51.840 ************************************
00:04:51.840 START TEST odd_alloc ************************************
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67
-- # nodes_test=()
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:51.840 13:04:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:04:56.033 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:56.033 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:56.033 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:56.033 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:56.033 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:56.033 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:56.033 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:56.033 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:56.033 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:56.033 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:56.033 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:56.033 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:56.033 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:56.033 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:56.033 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:56.033 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:56.033 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:56.033 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:56.033 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:04:56.033 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:56.033 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:56.033 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:56.033 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:56.033 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:56.033 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:56.033 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:56.033 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:56.033 13:04:36
setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:56.033 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:56.033 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.033 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.033 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.033 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.033 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.033 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.033 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.033 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.033 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43528236 kB' 'MemAvailable: 47521120 kB' 'Buffers: 6064 kB' 'Cached: 10562576 kB' 'SwapCached: 0 kB' 'Active: 7410512 kB' 'Inactive: 3689560 kB' 'Active(anon): 7012088 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534096 kB' 'Mapped: 180308 kB' 'Shmem: 6480656 kB' 'KReclaimable: 552608 kB' 'Slab: 1205696 kB' 'SReclaimable: 552608 kB' 'SUnreclaim: 653088 kB' 'KernelStack: 22080 kB' 'PageTables: 8404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 8483400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219440 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB' 00:04:56.033 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.033 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.033 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.033 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.033 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.033 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.034 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.035 13:04:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.035 13:04:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43529056 kB' 'MemAvailable: 47521940 kB' 'Buffers: 
6064 kB' 'Cached: 10562576 kB' 'SwapCached: 0 kB' 'Active: 7409688 kB' 'Inactive: 3689560 kB' 'Active(anon): 7011264 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533776 kB' 'Mapped: 180224 kB' 'Shmem: 6480656 kB' 'KReclaimable: 552608 kB' 'Slab: 1205704 kB' 'SReclaimable: 552608 kB' 'SUnreclaim: 653096 kB' 'KernelStack: 22080 kB' 'PageTables: 8388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 8483416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219440 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB' 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.035 13:04:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.035 13:04:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.035 13:04:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.035 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.036 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.037 13:04:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:56.037 13:04:36 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43528048 kB' 'MemAvailable: 47520932 kB' 'Buffers: 6064 kB' 'Cached: 10562596 kB' 'SwapCached: 0 kB' 'Active: 7409688 kB' 'Inactive: 3689560 kB' 'Active(anon): 7011264 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533772 kB' 'Mapped: 180224 kB' 'Shmem: 6480676 kB' 'KReclaimable: 552608 kB' 'Slab: 1205704 kB' 'SReclaimable: 552608 kB' 'SUnreclaim: 653096 kB' 'KernelStack: 22080 kB' 'PageTables: 8388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 
kB' 'Committed_AS: 8483684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219440 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB' 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.037 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.038 
13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.038 13:04:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.038 13:04:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.038 13:04:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.038 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.039 13:04:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.039 
13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.039 13:04:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:56.039 nr_hugepages=1025 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:56.039 resv_hugepages=0 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:56.039 surplus_hugepages=0 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:56.039 anon_hugepages=0 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@19 -- # local var val 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43527292 kB' 'MemAvailable: 47520176 kB' 'Buffers: 6064 kB' 'Cached: 10562616 kB' 'SwapCached: 0 kB' 'Active: 7410200 kB' 'Inactive: 3689560 kB' 'Active(anon): 7011776 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534284 kB' 'Mapped: 180728 kB' 'Shmem: 6480696 kB' 'KReclaimable: 552608 kB' 'Slab: 1205704 kB' 'SReclaimable: 552608 kB' 'SUnreclaim: 653096 kB' 'KernelStack: 22080 kB' 'PageTables: 8380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 8484684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219360 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB' 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.039 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ [... identical continue/read trace iterations elided for the remaining non-matching /proc/meminfo fields, MemFree through Unaccepted ...] 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc --
setup/hugepages.sh@27 -- # local node 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.041 
13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26400500 kB' 'MemUsed: 6238640 kB' 'SwapCached: 0 kB' 'Active: 2862132 kB' 'Inactive: 231284 kB' 'Active(anon): 2729084 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2685108 kB' 'Mapped: 80044 kB' 'AnonPages: 411476 kB' 'Shmem: 2320776 kB' 'KernelStack: 11496 kB' 'PageTables: 5272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 220920 kB' 'Slab: 534036 kB' 'SReclaimable: 220920 kB' 'SUnreclaim: 313116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.041 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ [... identical continue/read trace iterations elided for the remaining non-matching node0 meminfo fields, MemFree through HugePages_Free ...] 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.042
13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 17124128 kB' 'MemUsed: 10531952 kB' 'SwapCached: 0 kB' 'Active: 4552148 kB' 'Inactive: 3458276 kB' 'Active(anon): 4286772 kB' 'Inactive(anon): 0 kB' 'Active(file): 265376 kB' 'Inactive(file): 3458276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7883604 kB' 'Mapped: 100684 kB' 'AnonPages: 126944 kB' 'Shmem: 4159952 kB' 'KernelStack: 10600 kB' 'PageTables: 3164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 331688 kB' 'Slab: 671668 kB' 'SReclaimable: 
331688 kB' 'SUnreclaim: 339980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.042 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.043 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.043 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.043 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.043 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.043 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.043 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.043 13:04:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:56.043 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.043 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.043 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.043 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.043 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.043 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.043 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.043 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.043 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.043 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.043 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.043 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.043 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.043 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.043 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.043 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.043 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.043 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.043 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 
13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 13:04:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 13:04:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.303 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 13:04:36 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:56.304 node0=512 expecting 513 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:56.304 node1=513 expecting 512 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:56.304 00:04:56.304 real 0m4.308s 00:04:56.304 user 0m1.588s 00:04:56.304 sys 0m2.801s 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.304 13:04:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:56.304 ************************************ 00:04:56.304 END TEST odd_alloc 00:04:56.304 ************************************ 00:04:56.304 13:04:36 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:56.304 13:04:36 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.304 13:04:36 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.304 13:04:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:56.304 ************************************ 00:04:56.304 START TEST custom_alloc 00:04:56.304 ************************************ 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:56.304 13:04:36 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:56.304 13:04:36 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:56.304 13:04:36 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.304 13:04:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:05:00.500 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:05:00.500 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:05:00.500 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:05:00.500 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:05:00.500 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:05:00.500 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:05:00.500 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:05:00.500 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:05:00.500 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:05:00.500 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:05:00.500 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:05:00.500 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:05:00.500 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:05:00.500 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:05:00.500 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:05:00.500 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:05:00.500 0000:d8:00.0 (8086 0a54): Already using 
the vfio-pci driver 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.500 13:04:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42458296 kB' 'MemAvailable: 46451180 kB' 'Buffers: 6064 kB' 'Cached: 10562744 kB' 'SwapCached: 0 kB' 'Active: 7413424 kB' 'Inactive: 3689560 kB' 'Active(anon): 7015000 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536996 kB' 'Mapped: 180772 kB' 'Shmem: 6480824 kB' 'KReclaimable: 552608 kB' 'Slab: 1205844 kB' 'SReclaimable: 552608 kB' 'SUnreclaim: 653236 kB' 'KernelStack: 22224 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 8488988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219488 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB' 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.500 13:04:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.500 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
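The repeated `IFS=': '` / `read -r var val _` / `continue` lines above are the xtrace of one loop iteration per /proc/meminfo key while looking for AnonHugePages. A condensed sketch of what that scan in setup/common.sh appears to do (the function body and the optional file argument here are illustrative assumptions, not the exact upstream code):

```shell
#!/usr/bin/env bash
# Sketch of the meminfo key scan seen in the trace. Per-node sysfs files
# additionally carry a "Node N " key prefix which the real helper strips
# first (not shown here).
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the long runs of 'continue' above
        echo "$val"                        # value in kB, or a bare page count
        return 0
    done < "$mem_f"
    return 1
}

get_meminfo AnonHugePages   # prints e.g. 0 on this test node
```

Each non-matching key costs one `continue`, which is why a single lookup produces dozens of trace lines under `set -x`.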
00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 
13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.501 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.502 
13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42450312 kB' 'MemAvailable: 46443196 kB' 'Buffers: 6064 kB' 'Cached: 10562748 kB' 'SwapCached: 0 kB' 'Active: 7416864 kB' 'Inactive: 3689560 kB' 'Active(anon): 7018440 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 542024 kB' 'Mapped: 180744 kB' 'Shmem: 6480828 kB' 'KReclaimable: 552608 kB' 'Slab: 1205896 kB' 'SReclaimable: 552608 kB' 'SUnreclaim: 653288 kB' 'KernelStack: 22176 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 8493504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219396 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB' 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.502 
13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.502 13:04:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.502 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
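The long single-quoted dump emitted at common.sh@16 above comes from capturing the whole meminfo file into an array first. A minimal reconstruction, assuming the `mapfile`/`printf` steps shown at @28, @29, and @16 in the trace:

```shell
#!/usr/bin/env bash
shopt -s extglob                      # for the Node-prefix pattern below
mapfile -t mem < /proc/meminfo        # one array element per meminfo line (@28)
mem=("${mem[@]#Node +([0-9]) }")      # strip "Node N " from per-node files (@29)
printf '%s\n' "${mem[@]}"             # the multi-quoted dump seen under xtrace (@16)
```

Under `set -x`, the `printf '%s\n'` expansion is what renders every array element as a quoted `'Key: value kB'` token on one trace line.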
00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.503 13:04:40 
00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [per-key scan: VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd each read and skipped until HugePages_Surp matches]
00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:00.503 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:00.504 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:00.504 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:00.504 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.504 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:00.504 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:00.504 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.504 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.504 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:00.504 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:00.504 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42450080 kB' 'MemAvailable: 46442964 kB' 'Buffers: 6064 kB' 'Cached: 10562768 kB' 'SwapCached: 0 kB' 'Active: 7411604 kB' 'Inactive: 3689560 kB' 'Active(anon): 7013180 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535740 kB' 'Mapped: 180300 kB' 'Shmem: 6480848 kB' 'KReclaimable: 552608 kB' 'Slab: 1205896 kB' 'SReclaimable: 552608 kB' 'SUnreclaim: 653288 kB' 'KernelStack: 21984 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 8484544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219296 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB'
00:05:00.504 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [per-key scan: every key from MemTotal onward read and skipped until HugePages_Rsvd matches]
00:05:00.506 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:00.506 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:00.506 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:00.506 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:00.506 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
nr_hugepages=1536
00:05:00.506 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:05:00.506 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:05:00.506 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:05:00.506 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:05:00.506 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:05:00.506 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:00.506 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-31 -- # [same prologue as above, with get=HugePages_Total]
00:05:00.506 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42449800 kB' 'MemAvailable: 46442684 kB' 'Buffers: 6064 kB' 'Cached: 10562808 kB' 'SwapCached: 0 kB' 'Active: 7411304 kB' 'Inactive: 3689560 kB' 'Active(anon): 7012880 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535412 kB' 'Mapped: 180240 kB' 'Shmem: 6480888 kB' 'KReclaimable: 552608 kB' 'Slab: 1205992 kB' 'SReclaimable: 552608 kB' 'SUnreclaim: 653384 kB' 'KernelStack: 22064 kB' 'PageTables: 8324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 8484564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219296 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB'
00:05:00.506 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [per-key scan in progress: MemTotal through Zswap read and skipped so far, continuing toward HugePages_Total]
setup/common.sh@32 -- # continue 00:05:00.506 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.506 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.506 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.506 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.506 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.506 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.506 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.506 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 
13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 
13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.507 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26396656 kB' 'MemUsed: 6242484 kB' 'SwapCached: 0 kB' 'Active: 2857916 kB' 'Inactive: 231284 kB' 'Active(anon): 2724868 kB' 
'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2685204 kB' 'Mapped: 79544 kB' 'AnonPages: 407256 kB' 'Shmem: 2320872 kB' 'KernelStack: 11480 kB' 'PageTables: 5216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 220920 kB' 'Slab: 534444 kB' 'SReclaimable: 220920 kB' 'SUnreclaim: 313524 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.508 13:04:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.508 13:04:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.508 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.509 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.509 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.509 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.509 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.509 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.509 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.509 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.509 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.509 13:04:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:00.509 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:00.509 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:00.509 13:04:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31-32 read/continue iterations for KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free elided ...]
00:05:00.509 13:04:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:00.509 13:04:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:00.509 13:04:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:00.509 13:04:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:00.509 13:04:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:00.509 13:04:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:00.509 13:04:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:05:00.509 13:04:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:00.509 13:04:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:05:00.509 13:04:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:00.509 13:04:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:00.509 13:04:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.509 13:04:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:05:00.509 13:04:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:05:00.509 13:04:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.509 13:04:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.509 13:04:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:00.509 13:04:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:00.509 13:04:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 16053208 kB' 'MemUsed: 11602872 kB' 'SwapCached: 0 kB' 'Active: 4553424 kB' 'Inactive: 3458276 kB' 'Active(anon): 4288048 kB' 'Inactive(anon): 0 kB' 'Active(file): 265376 kB' 'Inactive(file): 3458276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7883688 kB' 'Mapped: 100696 kB' 'AnonPages: 128156 kB' 'Shmem: 4160036 kB' 'KernelStack: 10584 kB' 'PageTables: 3108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 331688 kB' 'Slab: 671548 kB' 'SReclaimable: 331688 kB' 'SUnreclaim: 339860 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... identical setup/common.sh@31-32 read/continue iterations for the node1 meminfo keys MemTotal through HugePages_Free elided ...]
00:05:00.770 13:04:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:00.770 13:04:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:00.770 13:04:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:00.770 13:04:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:00.770 13:04:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:00.770 13:04:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:00.770 13:04:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:00.770 13:04:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:00.770 node0=512 expecting 512
00:05:00.770 13:04:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:00.770 13:04:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:00.770 13:04:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:00.770 13:04:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:05:00.770 node1=1024 expecting 1024
00:05:00.770 13:04:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:05:00.770 
00:05:00.770 real	0m4.382s
00:05:00.770 user	0m1.669s
00:05:00.770 sys	0m2.794s
00:05:00.770 13:04:41 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:00.770 13:04:41 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:00.770 ************************************
00:05:00.770 END TEST custom_alloc
00:05:00.770 ************************************
00:05:00.770 13:04:41 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:00.770 13:04:41 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:00.770 13:04:41 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:00.770 13:04:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:00.770 ************************************
00:05:00.770 START TEST no_shrink_alloc
00:05:00.770 ************************************
00:05:00.770 13:04:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc
00:05:00.770 13:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:00.770 13:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:00.770 13:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:00.770 13:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:05:00.770 13:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:00.770 13:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:00.770 13:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:00.770 13:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:00.770 13:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:00.770 13:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:00.770 13:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:00.770 13:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:00.770 13:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:05:00.770 13:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:00.770 13:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:00.770 13:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:00.770 13:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:00.770 13:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:00.770 13:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:00.770 13:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
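The get_meminfo trace above repeats the same read/compare loop once per /proc/meminfo key. A minimal standalone sketch of that lookup, based on the commands visible in the setup/common.sh xtrace (the function name get_meminfo_sketch is illustrative, not the SPDK helper itself):

```shell
#!/usr/bin/env bash
# Sketch of the lookup traced above: pick /proc/meminfo or a per-node
# meminfo file, strip the "Node N " prefix that per-node files carry,
# then scan line by line for the requested key and print its value.
shopt -s extglob  # required for the +([0-9]) pattern below

get_meminfo_sketch() {
	local get=$1 node=$2
	local var val _
	local mem_f=/proc/meminfo
	local -a mem
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")  # drop "Node N " prefix, if any
	local line
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}

get_meminfo_sketch MemTotal  # system-wide value, in kB
```

Each non-matching key produces one `[[ ... == ... ]]` plus one `continue` in the xtrace, which is why the loop dominates the log.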
00:05:00.770 13:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:00.770 13:04:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:05:04.964 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:05:04.965 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:05:04.965 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:05:04.965 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:05:04.965 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:05:04.965 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:05:04.965 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:05:04.965 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:05:04.965 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:05:04.965 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:05:04.965 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:05:04.965 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:05:04.965 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:05:04.965 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:05:04.965 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:05:04.965 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:05:04.965 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:04.965 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:04.965 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:04.965 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:04.965 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:04.965 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:04.965 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:04.965 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:04.965 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:04.965 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:04.965 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:04.965 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:04.965 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:04.965 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:04.965 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:04.965 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:04.965 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:04.965 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:04.965 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:04.965 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:04.965 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:04.965 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43475008 kB' 'MemAvailable: 47467892 kB' 'Buffers: 6064 kB' 'Cached: 10562920 kB' 'SwapCached: 0 kB' 'Active: 7414680 kB' 'Inactive: 3689560 kB' 'Active(anon): 7016256 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538064 kB' 'Mapped: 180340 kB' 'Shmem: 6481000 kB' 'KReclaimable: 552608 kB' 'Slab: 1205796 kB' 'SReclaimable: 552608 kB' 'SUnreclaim: 653188 kB' 'KernelStack: 22304 kB' 'PageTables: 8568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8488100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219536 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB'
[... identical setup/common.sh@31-32 read/continue iterations for MemTotal through AnonPages elided; none matches AnonHugePages ...]
00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.966 13:04:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.966 13:04:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.966 
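[Editor's note: the xtrace above is get_meminfo scanning /proc/meminfo field by field until the requested key (here AnonHugePages) matches, then echoing its value. A minimal standalone sketch of that pattern follows; the function body is an assumption for illustration and simplifies the real setup/common.sh helper, which buffers the file with mapfile and strips per-node "Node N" prefixes.]

```shell
#!/usr/bin/env bash
# Sketch of the meminfo-scanning pattern seen in the trace (assumed shape;
# the actual SPDK setup/common.sh implementation differs in detail).
get_meminfo() {
	local get=$1 node=${2:-} var val _
	local mem_f=/proc/meminfo
	# Per-NUMA-node counters live under /sys when a node number is given.
	[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
		mem_f=/sys/devices/system/node/node$node/meminfo
	# Each line looks like "AnonHugePages:       0 kB"; IFS=': ' splits it
	# into the key, the value, and the (discarded) unit.
	while IFS=': ' read -r var val _; do
		if [[ $var == "$get" ]]; then
			echo "${val:-0}"
			return 0
		fi
	done <"$mem_f"
	echo 0
}

get_meminfo AnonHugePages  # prints the current kB count, or 0 if absent
```

Scanning every line this way is why the trace shows one `continue` per meminfo field before the match: the loop is linear in the number of keys, which is harmless here but explains the volume of xtrace output.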
13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43477224 kB' 'MemAvailable: 47470108 kB' 'Buffers: 6064 kB' 'Cached: 10562920 kB' 'SwapCached: 0 kB' 'Active: 7413536 kB' 'Inactive: 3689560 kB' 'Active(anon): 7015112 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536872 kB' 'Mapped: 
180324 kB' 'Shmem: 6481000 kB' 'KReclaimable: 552608 kB' 'Slab: 1205684 kB' 'SReclaimable: 552608 kB' 'SUnreclaim: 653076 kB' 'KernelStack: 22080 kB' 'PageTables: 9088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8488368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219456 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB' 00:05:04.966 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.967 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.967 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.967 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.967 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.967 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.967 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.967 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.967 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.967 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.967 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.967 13:04:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read -r var val _" trace repeats for each /proc/meminfo field from Buffers through the CmaTotal test ...]
00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@32 -- # continue 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43477152 kB' 'MemAvailable: 47470036 kB' 'Buffers: 6064 kB' 'Cached: 10562940 kB' 'SwapCached: 0 kB' 'Active: 7414072 kB' 'Inactive: 3689560 kB' 'Active(anon): 7015648 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537436 kB' 'Mapped: 180324 kB' 'Shmem: 6481020 kB' 'KReclaimable: 552608 kB' 'Slab: 1206132 kB' 'SReclaimable: 552608 kB' 'SUnreclaim: 653524 kB' 'KernelStack: 22304 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8486772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219456 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB' 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.968 13:04:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.969 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.970 13:04:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.970 13:04:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.970 
13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:04.970 nr_hugepages=1024 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:04.970 resv_hugepages=0 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:04.970 surplus_hugepages=0 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:04.970 anon_hugepages=0 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:04.970 13:04:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.970 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43477364 kB' 'MemAvailable: 47470248 kB' 'Buffers: 6064 kB' 'Cached: 10562964 kB' 'SwapCached: 0 kB' 'Active: 7413944 kB' 'Inactive: 3689560 kB' 'Active(anon): 7015520 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537304 kB' 'Mapped: 180324 kB' 'Shmem: 6481044 kB' 'KReclaimable: 552608 kB' 'Slab: 1206132 kB' 'SReclaimable: 552608 kB' 'SUnreclaim: 653524 kB' 'KernelStack: 22304 kB' 'PageTables: 9088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8488412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219472 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 
0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB' 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.971 13:04:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.971 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.972 13:04:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # 
no_nodes=2 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.972 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 25360468 kB' 'MemUsed: 7278672 kB' 'SwapCached: 0 kB' 'Active: 2859200 kB' 'Inactive: 231284 kB' 'Active(anon): 2726152 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 
3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2685300 kB' 'Mapped: 79540 kB' 'AnonPages: 408340 kB' 'Shmem: 2320968 kB' 'KernelStack: 11592 kB' 'PageTables: 5508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 220920 kB' 'Slab: 534544 kB' 'SReclaimable: 220920 kB' 'SUnreclaim: 313624 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.973 13:04:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.973 13:04:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.973 
13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.973 
13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.973 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [... identical "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read -r var val _" trace repeated for the remaining /proc/meminfo fields: WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free ...] 00:05:04.974 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.974 13:04:45 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@33 -- # echo 0 00:05:04.974 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:04.974 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:04.974 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:04.974 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:04.974 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:04.974 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:04.974 node0=1024 expecting 1024 00:05:04.974 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:04.974 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:04.974 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:04.974 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:04.974 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.974 13:04:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:05:09.170 0000:00:04.7 (8086 2021): Already using the vfio-pci driver [... same "Already using the vfio-pci driver" message repeated for 0000:00:04.0-0000:00:04.6, 0000:80:04.0-0000:80:04.7 (8086 2021) and 0000:d8:00.0 (8086 0a54) ...] 00:05:09.170 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:09.170 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:09.170 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:09.170 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:09.170 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:09.170 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:09.170 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:09.170 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:09.170 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:09.170 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:09.170 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:09.170 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:09.170 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:09.170 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.170 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.170 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.170 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.170 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.170 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.170 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.170 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.170 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43456796 kB' 'MemAvailable: 47449680 kB' 'Buffers: 6064 kB' 'Cached: 10563064 kB' 'SwapCached: 0 kB' 'Active: 7414364 kB' 'Inactive: 3689560 kB' 'Active(anon): 7015940 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537488 kB' 'Mapped: 180344 kB' 'Shmem: 6481144 kB' 'KReclaimable: 552608 kB' 'Slab: 1206200 kB' 'SReclaimable: 552608 kB' 'SUnreclaim: 653592 kB' 'KernelStack: 22096 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8486144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219440 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB' 00:05:09.170 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.170 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... identical "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue / IFS=': ' / read -r var val _" trace repeated for every /proc/meminfo field from MemFree through HardwareCorrupted ...] 00:05:09.172 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:09.172 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.172 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:09.172 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:09.172 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:09.172 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.172 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:09.172 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:09.172 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.172 13:04:49
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.172 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.172 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.172 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.172 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.172 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.172 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.172 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43458008 kB' 'MemAvailable: 47450892 kB' 'Buffers: 6064 kB' 'Cached: 10563068 kB' 'SwapCached: 0 kB' 'Active: 7413504 kB' 'Inactive: 3689560 kB' 'Active(anon): 7015080 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537116 kB' 'Mapped: 180256 kB' 'Shmem: 6481148 kB' 'KReclaimable: 552608 kB' 'Slab: 1206184 kB' 'SReclaimable: 552608 kB' 'SUnreclaim: 653576 kB' 'KernelStack: 22080 kB' 'PageTables: 8376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8486164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219408 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB' 00:05:09.172 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.172 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... identical "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read -r var val _" trace repeated for each subsequent /proc/meminfo field; the log is truncated mid-iteration at KReclaimable ...] 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- #
continue 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.173 13:04:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.173 13:04:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.173 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.174 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.174 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.174 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.174 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.174 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.174 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 
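The trace above is bash xtrace output of a `get_meminfo`-style helper: it reads `/proc/meminfo` line by line with `IFS=': '` word splitting, compares each field name against the one requested, and falls back to `0` when the field is absent. A minimal self-contained sketch of that pattern follows (function and variable names here are illustrative, not the exact SPDK `setup/common.sh` implementation; the file argument is added so the sketch is testable off-box):

```shell
#!/usr/bin/env bash
# Sketch of the meminfo-scan pattern seen in the trace: split each line on
# ':' and spaces into (field, value, rest), return the value of the requested
# field, or 0 if the field does not appear in the file.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Matching field found: print its numeric value and stop scanning.
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
        # Non-matching field: skip to the next line (the traced 'continue').
    done < "$mem_f"
    # Field absent from the file: report 0, as the traced 'echo 0' does.
    echo 0
}
```

Usage mirrors the log: `surp=$(get_meminfo HugePages_Surp)` yields `0` on a host with no surplus huge pages, and the same call shape is then repeated for `HugePages_Rsvd`.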
00:05:09.174 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:09.174 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:09.174 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:09.174 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:09.174 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:09.174 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.174 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.174 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.174 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.174 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.174 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.174 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.174 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.174 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43458876 kB' 'MemAvailable: 47451760 kB' 'Buffers: 6064 kB' 'Cached: 10563068 kB' 'SwapCached: 0 kB' 'Active: 7413216 kB' 'Inactive: 3689560 kB' 'Active(anon): 7014792 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536832 kB' 'Mapped: 180256 kB' 'Shmem: 6481148 kB' 'KReclaimable: 552608 kB' 'Slab: 1206184 kB' 'SReclaimable: 552608 kB' 'SUnreclaim: 653576 kB' 'KernelStack: 22080 kB' 'PageTables: 8376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8486184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219408 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB' 00:05:09.174 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [repetitive per-line /proc/meminfo scan elided: each field from MemTotal through CmaTotal is tested in turn against HugePages_Rsvd and skipped via IFS=': ' / read -r var val _ / continue; the captured trace is truncated mid-scan at 00:05:09.176] 
setup/common.sh@31 -- # IFS=': ' 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:09.176 13:04:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:09.176 nr_hugepages=1024 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:09.176 resv_hugepages=0 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:09.176 surplus_hugepages=0 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:09.176 anon_hugepages=0 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 
-- # mapfile -t mem 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43459380 kB' 'MemAvailable: 47452264 kB' 'Buffers: 6064 kB' 'Cached: 10563068 kB' 'SwapCached: 0 kB' 'Active: 7413720 kB' 'Inactive: 3689560 kB' 'Active(anon): 7015296 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537336 kB' 'Mapped: 180256 kB' 'Shmem: 6481148 kB' 'KReclaimable: 552608 kB' 'Slab: 1206184 kB' 'SReclaimable: 552608 kB' 'SUnreclaim: 653576 kB' 'KernelStack: 22080 kB' 'PageTables: 8376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8486208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219408 kB' 'VmallocChunk: 0 kB' 'Percpu: 117376 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3476852 kB' 'DirectMap2M: 19277824 kB' 'DirectMap1G: 46137344 kB' 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.176 13:04:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.176 13:04:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:09.176 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.177 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node=0 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 25354760 kB' 'MemUsed: 7284380 kB' 'SwapCached: 0 kB' 'Active: 2858744 kB' 'Inactive: 231284 kB' 'Active(anon): 2725696 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2685460 kB' 'Mapped: 79544 kB' 'AnonPages: 407704 kB' 'Shmem: 2321128 kB' 'KernelStack: 11496 kB' 'PageTables: 5272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 220920 kB' 'Slab: 534748 kB' 'SReclaimable: 220920 kB' 'SUnreclaim: 313828 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.178 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.179 13:04:49 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 
'node0=1024 expecting 1024' 00:05:09.179 node0=1024 expecting 1024 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:09.179 00:05:09.179 real 0m8.296s 00:05:09.179 user 0m2.901s 00:05:09.179 sys 0m5.489s 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.179 13:04:49 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:09.179 ************************************ 00:05:09.179 END TEST no_shrink_alloc 00:05:09.179 ************************************ 00:05:09.179 13:04:49 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:09.179 13:04:49 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:09.179 13:04:49 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:09.179 13:04:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:09.179 13:04:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:09.179 13:04:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:09.179 13:04:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:09.179 13:04:49 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:09.179 13:04:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:09.179 13:04:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:09.180 13:04:49 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:09.180 13:04:49 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:09.180 13:04:49 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:09.180 13:04:49 setup.sh.hugepages -- 
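The long trace above walks `/sys/devices/system/node/node0/meminfo` field by field with `IFS=': ' read -r var val _`, executing `continue` for every key until `HugePages_Surp` matches, then echoing its value. A minimal standalone sketch of that parse (the function name and the stdin-based interface are my own, not SPDK's):

```shell
# Sketch: extract one key's value from meminfo-style "Key: value kB" text on
# stdin. Per-node files under /sys/devices/system/node/ prefix every line with
# "Node <n> ", so that prefix is stripped first (the traced script does the
# same with a "${mem[@]#Node +([0-9]) }" extglob expansion).
parse_meminfo_key() {
  local key=$1 var val _
  sed -E 's/^Node [0-9]+ //' | while IFS=': ' read -r var val _; do
    if [[ $var == "$key" ]]; then
      echo "$val"   # numeric value only; the "kB" unit lands in $_
      break
    fi
  done
}
```

Usage against a real node file: `parse_meminfo_key HugePages_Free < /sys/devices/system/node/node0/meminfo`.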
setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:09.180 00:05:09.180 real 0m32.639s 00:05:09.180 user 0m11.239s 00:05:09.180 sys 0m19.969s 00:05:09.180 13:04:49 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.180 13:04:49 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:09.180 ************************************ 00:05:09.180 END TEST hugepages 00:05:09.180 ************************************ 00:05:09.180 13:04:49 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:05:09.180 13:04:49 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.180 13:04:49 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.180 13:04:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:09.180 ************************************ 00:05:09.180 START TEST driver 00:05:09.180 ************************************ 00:05:09.180 13:04:49 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:05:09.180 * Looking for test storage... 
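The `clear_hp` trace above (hugepages.sh@39-41) zeroes the hugepage pool for every page size on every NUMA node once the tests finish. A hedged sketch of that cleanup loop — the `NODE_ROOT` override is my addition so the loop can be exercised against a scratch directory; the traced script writes to the real sysfs tree:

```shell
# Sketch: reset every hugepage pool to 0 pages. NODE_ROOT defaults to the
# real sysfs location but can point at a fake tree for testing.
NODE_ROOT=${NODE_ROOT:-/sys/devices/system/node}

clear_hp() {
  local node hp
  for node in "$NODE_ROOT"/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
      [[ -e "$hp/nr_hugepages" ]] || continue  # glob may be unmatched
      echo 0 > "$hp/nr_hugepages"
    done
  done
}
```

Writing to the real `/sys` paths requires root, which is why the Jenkins node runs these steps under sudo.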
00:05:09.180 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:05:09.180 13:04:49 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:09.180 13:04:49 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:09.180 13:04:49 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:05:15.817 13:04:55 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:15.817 13:04:55 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.817 13:04:55 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.817 13:04:55 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:15.817 ************************************ 00:05:15.817 START TEST guess_driver 00:05:15.817 ************************************ 00:05:15.817 13:04:55 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:05:15.817 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:15.817 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:15.817 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:15.817 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:15.817 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups 00:05:15.817 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:15.817 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:15.817 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:15.817 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:15.817 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- 
# (( 256 > 0 )) 00:05:15.817 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:15.817 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:15.817 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:15.817 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:15.817 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:15.817 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:15.817 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:15.817 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:15.817 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:15.817 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:15.817 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:15.817 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:15.817 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:15.817 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:15.817 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:15.817 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:15.817 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:15.817 Looking for driver=vfio-pci 00:05:15.817 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:15.817 13:04:55 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output 
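The guess_driver trace above settles on vfio-pci by two checks: the host has populated IOMMU groups (`(( 256 > 0 ))`), and `modprobe --show-depends vfio_pci` resolves to actual `.ko` files. A hedged sketch of that detection logic — function names are mine, and the real `pick_driver` also tries other drivers (e.g. uio) before reporting "No valid driver found", which this sketch omits:

```shell
# Sketch: a module counts as loadable when modprobe resolves it to real .ko
# files; built-in or missing modules fail the *.ko* match.
is_driver() {
  [[ $(modprobe --show-depends "$1" 2>/dev/null) == *.ko* ]]
}

# Prefer vfio-pci only when the kernel has populated IOMMU groups.
pick_vfio() {
  local groups
  groups=$(compgen -G '/sys/kernel/iommu_groups/*' | wc -l)
  if (( groups > 0 )) && is_driver vfio_pci; then
    echo vfio-pci
  else
    echo 'No valid driver found'
  fi
}
```

On this Fedora 38 node the modprobe output lists the full dependency chain (irqbypass, iommufd, vfio, vfio_iommu_type1, vfio-pci-core, vfio-pci), any of which satisfies the `*.ko*` test.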
config 00:05:15.817 13:04:55 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.817 13:04:55 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:19.105 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:19.105 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:19.105 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:19.105 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:19.105 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:19.105 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:19.105 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:19.105 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:19.105 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:19.105 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:19.105 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:19.105 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:19.105 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:19.105 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:19.105 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:19.105 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:19.105 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # 
[[ vfio-pci == vfio-pci ]] 00:05:19.105 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:19.105 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:19.105 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:19.105 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:19.106 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:19.106 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:19.106 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:19.106 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:19.106 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:19.106 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:19.106 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:19.106 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:19.106 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:19.106 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:19.106 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:19.106 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:19.106 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:19.106 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:19.106 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ 
_ _ _ marker setup_driver 00:05:19.106 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:19.106 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:19.106 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:19.106 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:19.106 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:19.106 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:19.106 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:19.106 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:19.106 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:19.106 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:19.106 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:19.106 13:04:59 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:21.010 13:05:01 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:21.010 13:05:01 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:21.010 13:05:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:21.269 13:05:01 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:21.269 13:05:01 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:21.269 13:05:01 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:21.269 13:05:01 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:05:27.833 00:05:27.833 real 0m11.936s 00:05:27.833 user 0m3.143s 00:05:27.833 sys 0m6.087s 00:05:27.833 13:05:07 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.833 13:05:07 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:27.833 ************************************ 00:05:27.833 END TEST guess_driver 00:05:27.833 ************************************ 00:05:27.833 00:05:27.833 real 0m17.641s 00:05:27.833 user 0m4.772s 00:05:27.833 sys 0m9.318s 00:05:27.833 13:05:07 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.833 13:05:07 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:27.833 ************************************ 00:05:27.833 END TEST driver 00:05:27.833 ************************************ 00:05:27.833 13:05:07 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:05:27.833 13:05:07 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.833 13:05:07 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.833 13:05:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:27.833 ************************************ 00:05:27.833 START TEST devices 00:05:27.833 ************************************ 00:05:27.833 13:05:07 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:05:27.833 * Looking for test storage... 
00:05:27.833 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:05:27.833 13:05:07 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:27.833 13:05:07 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:27.833 13:05:07 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:27.833 13:05:07 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:05:32.023 13:05:11 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:32.023 13:05:11 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:32.023 13:05:11 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:32.023 13:05:11 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:32.023 13:05:11 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:32.023 13:05:11 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:32.023 13:05:11 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:32.023 13:05:11 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:32.023 13:05:11 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:32.023 13:05:11 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:32.023 13:05:11 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:32.023 13:05:11 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:32.023 13:05:11 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:32.023 13:05:11 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:32.023 13:05:11 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:32.023 13:05:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
00:05:32.023 13:05:11 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:32.023 13:05:11 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:05:32.023 13:05:11 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:05:32.023 13:05:11 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:32.023 13:05:11 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:32.023 13:05:11 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:32.023 No valid GPT data, bailing 00:05:32.023 13:05:12 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:32.023 13:05:12 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:32.023 13:05:12 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:32.023 13:05:12 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:32.023 13:05:12 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:32.023 13:05:12 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:32.023 13:05:12 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016 00:05:32.023 13:05:12 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:05:32.023 13:05:12 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:32.023 13:05:12 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:05:32.023 13:05:12 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:32.023 13:05:12 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:32.023 13:05:12 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:32.023 13:05:12 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.023 13:05:12 setup.sh.devices -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.023 13:05:12 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:32.023 ************************************ 00:05:32.023 START TEST nvme_mount 00:05:32.023 ************************************ 00:05:32.023 13:05:12 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:05:32.023 13:05:12 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:32.023 13:05:12 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:32.023 13:05:12 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:32.023 13:05:12 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:32.023 13:05:12 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:32.023 13:05:12 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:32.023 13:05:12 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:32.023 13:05:12 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:32.023 13:05:12 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:32.023 13:05:12 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:32.023 13:05:12 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:32.023 13:05:12 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:32.023 13:05:12 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:32.023 13:05:12 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:32.023 13:05:12 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:32.023 13:05:12 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:32.023 13:05:12 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:32.023 13:05:12 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:32.023 13:05:12 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:32.591 Creating new GPT entries in memory. 00:05:32.591 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:32.591 other utilities. 00:05:32.591 13:05:13 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:32.591 13:05:13 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:32.591 13:05:13 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:32.591 13:05:13 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:32.591 13:05:13 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:33.691 Creating new GPT entries in memory. 00:05:33.691 The operation has completed successfully. 
00:05:33.691 13:05:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:33.691 13:05:14 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:33.691 13:05:14 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 579190 00:05:33.691 13:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.691 13:05:14 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:33.691 13:05:14 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.691 13:05:14 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:33.691 13:05:14 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:33.691 13:05:14 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.691 13:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:33.691 13:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:33.691 13:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:33.691 13:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.691 13:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:33.691 
13:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:33.691 13:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:33.691 13:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:33.691 13:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:33.691 13:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.691 13:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:33.691 13:05:14 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:33.691 13:05:14 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:33.691 13:05:14 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:37.882 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.882 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.882 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.882 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.882 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.882 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.882 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.882 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.882 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 
00:05:37.882 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.882 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.882 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.882 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.882 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.882 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.882 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.882 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.882 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.882 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.882 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.882 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.882 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.882 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.882 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.882 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.882 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.882 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.882 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.883 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.883 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.883 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.883 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.883 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:37.883 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:37.883 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:37.883 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.883 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:37.883 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:37.883 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:37.883 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:37.883 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:37.883 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:37.883 13:05:18 
setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:37.883 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:37.883 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:37.883 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:37.883 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:37.883 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:37.883 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:38.142 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:38.142 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:05:38.142 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:38.142 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:38.142 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:38.142 13:05:18 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:38.142 13:05:18 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:38.142 13:05:18 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:38.142 13:05:18 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:38.401 13:05:18 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:38.401 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:38.401 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:38.401 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:38.401 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:38.401 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:38.401 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:38.401 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:38.401 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:38.401 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:38.401 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.401 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:38.401 13:05:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:38.401 13:05:18 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:38.401 13:05:18 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 
00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # 
setup output config 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.588 13:05:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:45.902 13:05:26 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:45.902 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.161 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:46.161 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:46.161 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:46.161 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.420 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:46.420 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:46.420 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:46.420 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:46.420 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:46.420 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:46.420 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:46.420 13:05:26 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:46.420 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:46.420 00:05:46.420 real 0m14.661s 00:05:46.420 user 0m4.234s 00:05:46.420 sys 0m8.369s 00:05:46.420 13:05:26 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.420 13:05:26 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:46.420 ************************************ 00:05:46.420 END TEST nvme_mount 00:05:46.420 ************************************ 00:05:46.420 13:05:26 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:46.420 13:05:26 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.420 13:05:26 setup.sh.devices -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.420 13:05:26 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:46.420 ************************************ 00:05:46.420 START TEST dm_mount 00:05:46.420 ************************************ 00:05:46.420 13:05:26 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:05:46.420 13:05:26 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:46.420 13:05:26 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:46.420 13:05:26 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:46.420 13:05:26 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:46.420 13:05:26 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:46.420 13:05:26 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:46.420 13:05:26 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:46.420 13:05:26 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:46.420 13:05:26 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:46.420 13:05:26 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:46.420 13:05:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:46.420 13:05:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:46.420 13:05:26 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:46.420 13:05:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:46.420 13:05:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:46.420 13:05:26 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:46.420 13:05:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:46.420 13:05:26 
setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:46.420 13:05:26 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:46.420 13:05:26 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:46.420 13:05:26 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:47.357 Creating new GPT entries in memory. 00:05:47.357 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:47.357 other utilities. 00:05:47.357 13:05:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:47.357 13:05:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:47.357 13:05:27 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:47.357 13:05:27 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:47.357 13:05:27 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:48.734 Creating new GPT entries in memory. 00:05:48.734 The operation has completed successfully. 00:05:48.734 13:05:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:48.734 13:05:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:48.734 13:05:28 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:48.734 13:05:28 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:48.734 13:05:28 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:49.672 The operation has completed successfully. 
00:05:49.672 13:05:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:49.672 13:05:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:49.672 13:05:29 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 584461 00:05:49.672 13:05:29 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:49.672 13:05:29 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:49.672 13:05:29 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:49.672 13:05:29 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:49.672 13:05:29 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:49.672 13:05:29 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:49.672 13:05:29 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:49.672 13:05:29 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:49.672 13:05:29 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:49.672 13:05:29 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:05:49.672 13:05:29 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:05:49.672 13:05:29 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:05:49.672 13:05:29 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:05:49.672 13:05:29 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:49.672 13:05:29 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local 
dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount size= 00:05:49.672 13:05:29 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:49.672 13:05:29 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:49.672 13:05:29 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:49.672 13:05:30 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:49.672 13:05:30 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:49.672 13:05:30 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:49.672 13:05:30 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:49.672 13:05:30 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:49.672 13:05:30 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:49.672 13:05:30 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:49.672 13:05:30 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:49.672 13:05:30 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:49.672 13:05:30 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:49.672 13:05:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:49.672 
13:05:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:49.672 13:05:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:49.672 13:05:30 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:49.672 13:05:30 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.867 
13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.867 
13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:53.867 13:05:33 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.867 13:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:53.867 13:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:53.867 13:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:53.867 13:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:53.867 13:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:53.867 13:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:53.867 13:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:05:53.867 13:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:53.867 13:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:05:53.867 13:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:53.867 13:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local 
test_file= 00:05:53.867 13:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:53.867 13:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:53.867 13:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:53.867 13:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.867 13:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:53.867 13:05:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:53.867 13:05:34 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:53.867 13:05:34 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:58.058 13:05:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.058 13:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:58.058 13:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:05:58.058 13:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:58.058 13:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:58.058 13:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:58.058 13:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:58.058 13:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:58.058 13:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:58.058 13:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:58.058 13:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:58.058 13:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:58.058 13:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:58.058 13:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:58.058 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 
(ext4): 53 ef 00:05:58.058 13:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:58.058 13:05:38 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:58.058 00:05:58.058 real 0m11.434s 00:05:58.058 user 0m2.916s 00:05:58.058 sys 0m5.642s 00:05:58.058 13:05:38 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.058 13:05:38 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:58.058 ************************************ 00:05:58.058 END TEST dm_mount 00:05:58.058 ************************************ 00:05:58.058 13:05:38 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:58.058 13:05:38 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:58.058 13:05:38 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:58.058 13:05:38 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:58.058 13:05:38 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:58.058 13:05:38 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:58.058 13:05:38 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:58.058 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:58.058 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:05:58.058 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:58.058 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:58.058 13:05:38 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:58.059 13:05:38 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:58.318 13:05:38 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 
00:05:58.318 13:05:38 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:58.318 13:05:38 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:58.318 13:05:38 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:58.318 13:05:38 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:58.318 00:05:58.318 real 0m31.305s 00:05:58.318 user 0m8.874s 00:05:58.318 sys 0m17.442s 00:05:58.318 13:05:38 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.318 13:05:38 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:58.318 ************************************ 00:05:58.318 END TEST devices 00:05:58.318 ************************************ 00:05:58.318 00:05:58.318 real 1m51.828s 00:05:58.318 user 0m34.365s 00:05:58.318 sys 1m5.381s 00:05:58.318 13:05:38 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.318 13:05:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:58.318 ************************************ 00:05:58.318 END TEST setup.sh 00:05:58.318 ************************************ 00:05:58.318 13:05:38 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:06:02.508 Hugepages 00:06:02.508 node hugesize free / total 00:06:02.508 node0 1048576kB 0 / 0 00:06:02.508 node0 2048kB 1024 / 1024 00:06:02.508 node1 1048576kB 0 / 0 00:06:02.508 node1 2048kB 1024 / 1024 00:06:02.508 00:06:02.508 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:02.508 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:06:02.508 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:06:02.508 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:06:02.508 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:06:02.508 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:06:02.508 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:06:02.508 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:06:02.508 I/OAT 0000:00:04.7 8086 2021 
0 ioatdma - - 00:06:02.508 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:06:02.508 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:06:02.508 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:06:02.508 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:06:02.508 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:06:02.508 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:06:02.508 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:06:02.508 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:06:02.508 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:06:02.508 13:05:42 -- spdk/autotest.sh@130 -- # uname -s 00:06:02.508 13:05:42 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:06:02.508 13:05:42 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:06:02.508 13:05:42 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:06:06.705 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:06.705 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:06.705 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:06.705 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:06.705 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:06.705 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:06.705 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:06.705 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:06.705 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:06.705 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:06.705 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:06.705 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:06.705 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:06.705 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:06.705 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:06.705 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:08.681 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:06:08.681 13:05:48 -- common/autotest_common.sh@1532 -- # sleep 1 00:06:09.618 13:05:49 -- common/autotest_common.sh@1533 -- # bdfs=() 
00:06:09.618 13:05:49 -- common/autotest_common.sh@1533 -- # local bdfs 00:06:09.618 13:05:49 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:06:09.618 13:05:49 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:06:09.618 13:05:49 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:09.618 13:05:49 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:09.618 13:05:49 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:09.618 13:05:49 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:09.618 13:05:49 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:09.618 13:05:50 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:09.618 13:05:50 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:06:09.618 13:05:50 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:06:13.807 Waiting for block devices as requested 00:06:13.807 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:13.807 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:13.807 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:14.066 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:14.066 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:14.066 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:14.325 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:14.325 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:14.325 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:14.583 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:14.583 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:14.583 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:14.842 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:14.842 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:14.842 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:15.100 0000:80:04.0 (8086 
2021): vfio-pci -> ioatdma 00:06:15.100 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:06:15.359 13:05:55 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:15.359 13:05:55 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:06:15.359 13:05:55 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:06:15.359 13:05:55 -- common/autotest_common.sh@1502 -- # grep 0000:d8:00.0/nvme/nvme 00:06:15.359 13:05:55 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:06:15.359 13:05:55 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:06:15.359 13:05:55 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:06:15.359 13:05:55 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:15.359 13:05:55 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:15.359 13:05:55 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:15.359 13:05:55 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:15.359 13:05:55 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:15.359 13:05:55 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:15.359 13:05:55 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:06:15.359 13:05:55 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:15.359 13:05:55 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:15.359 13:05:55 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:15.359 13:05:55 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:15.359 13:05:55 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:15.359 13:05:55 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:15.359 13:05:55 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:15.359 13:05:55 -- common/autotest_common.sh@1557 -- # continue 
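The `oacs` dance above checks bit 3 of the controller's Optional Admin Command Support field (namespace management) before deciding whether to continue. A sketch of that parse with the `nvme id-ctrl` output line mocked (the sample value `0xe` is copied from the log; on real hardware the line would come from `nvme id-ctrl /dev/nvme0 | grep oacs`):

```shell
#!/usr/bin/env bash
# Mocked id-ctrl line, matching the trace above; a real run would pipe
# `nvme id-ctrl /dev/nvme0` through grep instead.
id_ctrl_line='oacs      : 0xe'
oacs=$(cut -d: -f2 <<< "$id_ctrl_line")   # -> ' 0xe'
# Bit 3 (0x8) of OACS advertises namespace-management support.
oacs_ns_manage=$(( oacs & 0x8 ))
echo "$oacs_ns_manage"
```

With `0xe` (binary 1110) the result is 8, which is why the trace takes the `[[ 8 -ne 0 ]]` branch and goes on to check `unvmcap`.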
00:06:15.359 13:05:55 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:15.359 13:05:55 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:15.359 13:05:55 -- common/autotest_common.sh@10 -- # set +x 00:06:15.359 13:05:55 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:15.359 13:05:55 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:15.359 13:05:55 -- common/autotest_common.sh@10 -- # set +x 00:06:15.359 13:05:55 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:06:19.551 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:19.551 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:19.551 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:19.551 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:19.551 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:19.551 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:19.551 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:19.551 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:19.551 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:19.551 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:19.551 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:19.551 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:19.551 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:19.551 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:19.551 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:19.551 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:21.454 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:06:21.715 13:06:02 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:21.715 13:06:02 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:21.715 13:06:02 -- common/autotest_common.sh@10 -- # set +x 00:06:21.715 13:06:02 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:21.715 13:06:02 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:21.715 13:06:02 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 
00:06:21.715 13:06:02 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:21.715 13:06:02 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:21.715 13:06:02 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:21.715 13:06:02 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:21.715 13:06:02 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:21.715 13:06:02 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:21.715 13:06:02 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:21.715 13:06:02 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:21.715 13:06:02 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:21.974 13:06:02 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:06:21.974 13:06:02 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:21.974 13:06:02 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:06:21.974 13:06:02 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:06:21.974 13:06:02 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:21.974 13:06:02 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:06:21.974 13:06:02 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:d8:00.0 00:06:21.974 13:06:02 -- common/autotest_common.sh@1592 -- # [[ -z 0000:d8:00.0 ]] 00:06:21.974 13:06:02 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=596025 00:06:21.974 13:06:02 -- common/autotest_common.sh@1598 -- # waitforlisten 596025 00:06:21.974 13:06:02 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:06:21.974 13:06:02 -- common/autotest_common.sh@831 -- # '[' -z 596025 ']' 00:06:21.974 13:06:02 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.974 13:06:02 -- common/autotest_common.sh@836 
-- # local max_retries=100 00:06:21.974 13:06:02 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.974 13:06:02 -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.974 13:06:02 -- common/autotest_common.sh@10 -- # set +x 00:06:21.974 [2024-07-26 13:06:02.320295] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:06:21.974 [2024-07-26 13:06:02.320353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid596025 ] 00:06:21.974 [2024-07-26 13:06:02.439346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.233 [2024-07-26 13:06:02.522959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.800 13:06:03 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.800 13:06:03 -- common/autotest_common.sh@864 -- # return 0 00:06:22.800 13:06:03 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:06:22.800 13:06:03 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:06:22.800 13:06:03 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:06:26.088 nvme0n1 00:06:26.088 13:06:06 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:26.088 [2024-07-26 13:06:06.498113] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:06:26.088 request: 00:06:26.088 { 00:06:26.088 "nvme_ctrlr_name": "nvme0", 00:06:26.088 "password": "test", 00:06:26.088 "method": "bdev_nvme_opal_revert", 
00:06:26.088 "req_id": 1 00:06:26.088 } 00:06:26.088 Got JSON-RPC error response 00:06:26.088 response: 00:06:26.088 { 00:06:26.088 "code": -32602, 00:06:26.088 "message": "Invalid parameters" 00:06:26.088 } 00:06:26.088 13:06:06 -- common/autotest_common.sh@1604 -- # true 00:06:26.088 13:06:06 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:06:26.088 13:06:06 -- common/autotest_common.sh@1608 -- # killprocess 596025 00:06:26.088 13:06:06 -- common/autotest_common.sh@950 -- # '[' -z 596025 ']' 00:06:26.088 13:06:06 -- common/autotest_common.sh@954 -- # kill -0 596025 00:06:26.088 13:06:06 -- common/autotest_common.sh@955 -- # uname 00:06:26.088 13:06:06 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:26.088 13:06:06 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 596025 00:06:26.088 13:06:06 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:26.089 13:06:06 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:26.089 13:06:06 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 596025' 00:06:26.089 killing process with pid 596025 00:06:26.089 13:06:06 -- common/autotest_common.sh@969 -- # kill 596025 00:06:26.089 13:06:06 -- common/autotest_common.sh@974 -- # wait 596025 00:06:29.373 13:06:09 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:29.373 13:06:09 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:29.373 13:06:09 -- spdk/autotest.sh@155 -- # [[ 1 -eq 1 ]] 00:06:29.373 13:06:09 -- spdk/autotest.sh@156 -- # [[ 0 -eq 1 ]] 00:06:29.373 13:06:09 -- spdk/autotest.sh@159 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/qat_setup.sh 00:06:29.631 Restarting all devices. 
00:06:36.263 lstat() error: No such file or directory 00:06:36.263 QAT Error: No GENERAL section found 00:06:36.263 Failed to configure qat_dev0 00:06:36.263 lstat() error: No such file or directory 00:06:36.263 QAT Error: No GENERAL section found 00:06:36.263 Failed to configure qat_dev1 00:06:36.263 lstat() error: No such file or directory 00:06:36.263 QAT Error: No GENERAL section found 00:06:36.263 Failed to configure qat_dev2 00:06:36.263 lstat() error: No such file or directory 00:06:36.263 QAT Error: No GENERAL section found 00:06:36.263 Failed to configure qat_dev3 00:06:36.263 lstat() error: No such file or directory 00:06:36.263 QAT Error: No GENERAL section found 00:06:36.263 Failed to configure qat_dev4 00:06:36.263 enable sriov 00:06:36.263 Checking status of all devices. 00:06:36.263 There is 5 QAT acceleration device(s) in the system: 00:06:36.263 qat_dev0 - type: c6xx, inst_id: 0, node_id: 0, bsf: 0000:1a:00.0, #accel: 5 #engines: 10 state: down 00:06:36.263 qat_dev1 - type: c6xx, inst_id: 1, node_id: 0, bsf: 0000:1c:00.0, #accel: 5 #engines: 10 state: down 00:06:36.263 qat_dev2 - type: c6xx, inst_id: 2, node_id: 0, bsf: 0000:1e:00.0, #accel: 5 #engines: 10 state: down 00:06:36.263 qat_dev3 - type: c6xx, inst_id: 3, node_id: 0, bsf: 0000:3d:00.0, #accel: 5 #engines: 10 state: down 00:06:36.263 qat_dev4 - type: c6xx, inst_id: 4, node_id: 0, bsf: 0000:3f:00.0, #accel: 5 #engines: 10 state: down 00:06:36.263 0000:1a:00.0 set to 16 VFs 00:06:36.830 0000:1c:00.0 set to 16 VFs 00:06:37.764 0000:1e:00.0 set to 16 VFs 00:06:38.701 0000:3d:00.0 set to 16 VFs 00:06:39.267 0000:3f:00.0 set to 16 VFs 00:06:41.798 Properly configured the qat device with driver uio_pci_generic. 
00:06:41.798 13:06:22 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:41.798 13:06:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:41.798 13:06:22 -- common/autotest_common.sh@10 -- # set +x 00:06:41.798 13:06:22 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:41.798 13:06:22 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh 00:06:41.798 13:06:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.798 13:06:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.798 13:06:22 -- common/autotest_common.sh@10 -- # set +x 00:06:41.798 ************************************ 00:06:41.798 START TEST env 00:06:41.798 ************************************ 00:06:41.798 13:06:22 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh 00:06:41.798 * Looking for test storage... 00:06:41.798 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env 00:06:41.798 13:06:22 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut 00:06:41.798 13:06:22 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.798 13:06:22 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.798 13:06:22 env -- common/autotest_common.sh@10 -- # set +x 00:06:41.798 ************************************ 00:06:41.798 START TEST env_memory 00:06:41.798 ************************************ 00:06:41.798 13:06:22 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut 00:06:41.798 00:06:41.798 00:06:41.798 CUnit - A unit testing framework for C - Version 2.1-3 00:06:41.798 http://cunit.sourceforge.net/ 00:06:41.798 00:06:41.798 00:06:41.798 Suite: memory 00:06:41.798 Test: alloc and free memory map ...[2024-07-26 13:06:22.305740] 
/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:41.798 passed 00:06:42.057 Test: mem map translation ...[2024-07-26 13:06:22.332647] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:42.057 [2024-07-26 13:06:22.332668] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:42.057 [2024-07-26 13:06:22.332721] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:42.057 [2024-07-26 13:06:22.332733] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:42.057 passed 00:06:42.058 Test: mem map registration ...[2024-07-26 13:06:22.385850] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:42.058 [2024-07-26 13:06:22.385870] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:42.058 passed 00:06:42.058 Test: mem map adjacent registrations ...passed 00:06:42.058 00:06:42.058 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.058 suites 1 1 n/a 0 0 00:06:42.058 tests 4 4 4 0 0 00:06:42.058 asserts 152 152 152 0 n/a 00:06:42.058 00:06:42.058 Elapsed time = 0.186 seconds 00:06:42.058 00:06:42.058 real 0m0.201s 00:06:42.058 user 0m0.187s 00:06:42.058 sys 0m0.013s 00:06:42.058 13:06:22 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:06:42.058 13:06:22 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:42.058 ************************************ 00:06:42.058 END TEST env_memory 00:06:42.058 ************************************ 00:06:42.058 13:06:22 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:42.058 13:06:22 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.058 13:06:22 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.058 13:06:22 env -- common/autotest_common.sh@10 -- # set +x 00:06:42.058 ************************************ 00:06:42.058 START TEST env_vtophys 00:06:42.058 ************************************ 00:06:42.058 13:06:22 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:42.058 EAL: lib.eal log level changed from notice to debug 00:06:42.058 EAL: Detected lcore 0 as core 0 on socket 0 00:06:42.058 EAL: Detected lcore 1 as core 1 on socket 0 00:06:42.058 EAL: Detected lcore 2 as core 2 on socket 0 00:06:42.058 EAL: Detected lcore 3 as core 3 on socket 0 00:06:42.058 EAL: Detected lcore 4 as core 4 on socket 0 00:06:42.058 EAL: Detected lcore 5 as core 5 on socket 0 00:06:42.058 EAL: Detected lcore 6 as core 6 on socket 0 00:06:42.058 EAL: Detected lcore 7 as core 8 on socket 0 00:06:42.058 EAL: Detected lcore 8 as core 9 on socket 0 00:06:42.058 EAL: Detected lcore 9 as core 10 on socket 0 00:06:42.058 EAL: Detected lcore 10 as core 11 on socket 0 00:06:42.058 EAL: Detected lcore 11 as core 12 on socket 0 00:06:42.058 EAL: Detected lcore 12 as core 13 on socket 0 00:06:42.058 EAL: Detected lcore 13 as core 14 on socket 0 00:06:42.058 EAL: Detected lcore 14 as core 16 on socket 0 00:06:42.058 EAL: Detected lcore 15 as core 17 on socket 0 00:06:42.058 EAL: Detected lcore 16 as core 18 on socket 0 00:06:42.058 EAL: Detected lcore 17 as core 19 on socket 0 00:06:42.058 EAL: 
Detected lcore 18 as core 20 on socket 0 00:06:42.058 EAL: Detected lcore 19 as core 21 on socket 0 00:06:42.058 EAL: Detected lcore 20 as core 22 on socket 0 00:06:42.058 EAL: Detected lcore 21 as core 24 on socket 0 00:06:42.058 EAL: Detected lcore 22 as core 25 on socket 0 00:06:42.058 EAL: Detected lcore 23 as core 26 on socket 0 00:06:42.058 EAL: Detected lcore 24 as core 27 on socket 0 00:06:42.058 EAL: Detected lcore 25 as core 28 on socket 0 00:06:42.058 EAL: Detected lcore 26 as core 29 on socket 0 00:06:42.058 EAL: Detected lcore 27 as core 30 on socket 0 00:06:42.058 EAL: Detected lcore 28 as core 0 on socket 1 00:06:42.058 EAL: Detected lcore 29 as core 1 on socket 1 00:06:42.058 EAL: Detected lcore 30 as core 2 on socket 1 00:06:42.058 EAL: Detected lcore 31 as core 3 on socket 1 00:06:42.058 EAL: Detected lcore 32 as core 4 on socket 1 00:06:42.058 EAL: Detected lcore 33 as core 5 on socket 1 00:06:42.058 EAL: Detected lcore 34 as core 6 on socket 1 00:06:42.058 EAL: Detected lcore 35 as core 8 on socket 1 00:06:42.058 EAL: Detected lcore 36 as core 9 on socket 1 00:06:42.058 EAL: Detected lcore 37 as core 10 on socket 1 00:06:42.058 EAL: Detected lcore 38 as core 11 on socket 1 00:06:42.058 EAL: Detected lcore 39 as core 12 on socket 1 00:06:42.058 EAL: Detected lcore 40 as core 13 on socket 1 00:06:42.058 EAL: Detected lcore 41 as core 14 on socket 1 00:06:42.058 EAL: Detected lcore 42 as core 16 on socket 1 00:06:42.058 EAL: Detected lcore 43 as core 17 on socket 1 00:06:42.058 EAL: Detected lcore 44 as core 18 on socket 1 00:06:42.058 EAL: Detected lcore 45 as core 19 on socket 1 00:06:42.058 EAL: Detected lcore 46 as core 20 on socket 1 00:06:42.058 EAL: Detected lcore 47 as core 21 on socket 1 00:06:42.058 EAL: Detected lcore 48 as core 22 on socket 1 00:06:42.058 EAL: Detected lcore 49 as core 24 on socket 1 00:06:42.058 EAL: Detected lcore 50 as core 25 on socket 1 00:06:42.058 EAL: Detected lcore 51 as core 26 on socket 1 00:06:42.058 EAL: 
Detected lcore 52 as core 27 on socket 1 00:06:42.058 EAL: Detected lcore 53 as core 28 on socket 1 00:06:42.058 EAL: Detected lcore 54 as core 29 on socket 1 00:06:42.058 EAL: Detected lcore 55 as core 30 on socket 1 00:06:42.058 EAL: Detected lcore 56 as core 0 on socket 0 00:06:42.058 EAL: Detected lcore 57 as core 1 on socket 0 00:06:42.058 EAL: Detected lcore 58 as core 2 on socket 0 00:06:42.058 EAL: Detected lcore 59 as core 3 on socket 0 00:06:42.058 EAL: Detected lcore 60 as core 4 on socket 0 00:06:42.058 EAL: Detected lcore 61 as core 5 on socket 0 00:06:42.058 EAL: Detected lcore 62 as core 6 on socket 0 00:06:42.058 EAL: Detected lcore 63 as core 8 on socket 0 00:06:42.058 EAL: Detected lcore 64 as core 9 on socket 0 00:06:42.058 EAL: Detected lcore 65 as core 10 on socket 0 00:06:42.058 EAL: Detected lcore 66 as core 11 on socket 0 00:06:42.058 EAL: Detected lcore 67 as core 12 on socket 0 00:06:42.058 EAL: Detected lcore 68 as core 13 on socket 0 00:06:42.058 EAL: Detected lcore 69 as core 14 on socket 0 00:06:42.058 EAL: Detected lcore 70 as core 16 on socket 0 00:06:42.058 EAL: Detected lcore 71 as core 17 on socket 0 00:06:42.058 EAL: Detected lcore 72 as core 18 on socket 0 00:06:42.058 EAL: Detected lcore 73 as core 19 on socket 0 00:06:42.058 EAL: Detected lcore 74 as core 20 on socket 0 00:06:42.058 EAL: Detected lcore 75 as core 21 on socket 0 00:06:42.058 EAL: Detected lcore 76 as core 22 on socket 0 00:06:42.058 EAL: Detected lcore 77 as core 24 on socket 0 00:06:42.058 EAL: Detected lcore 78 as core 25 on socket 0 00:06:42.058 EAL: Detected lcore 79 as core 26 on socket 0 00:06:42.058 EAL: Detected lcore 80 as core 27 on socket 0 00:06:42.058 EAL: Detected lcore 81 as core 28 on socket 0 00:06:42.058 EAL: Detected lcore 82 as core 29 on socket 0 00:06:42.058 EAL: Detected lcore 83 as core 30 on socket 0 00:06:42.058 EAL: Detected lcore 84 as core 0 on socket 1 00:06:42.058 EAL: Detected lcore 85 as core 1 on socket 1 00:06:42.058 EAL: 
Detected lcore 86 as core 2 on socket 1 00:06:42.058 EAL: Detected lcore 87 as core 3 on socket 1 00:06:42.058 EAL: Detected lcore 88 as core 4 on socket 1 00:06:42.058 EAL: Detected lcore 89 as core 5 on socket 1 00:06:42.058 EAL: Detected lcore 90 as core 6 on socket 1 00:06:42.058 EAL: Detected lcore 91 as core 8 on socket 1 00:06:42.058 EAL: Detected lcore 92 as core 9 on socket 1 00:06:42.058 EAL: Detected lcore 93 as core 10 on socket 1 00:06:42.058 EAL: Detected lcore 94 as core 11 on socket 1 00:06:42.058 EAL: Detected lcore 95 as core 12 on socket 1 00:06:42.058 EAL: Detected lcore 96 as core 13 on socket 1 00:06:42.058 EAL: Detected lcore 97 as core 14 on socket 1 00:06:42.058 EAL: Detected lcore 98 as core 16 on socket 1 00:06:42.058 EAL: Detected lcore 99 as core 17 on socket 1 00:06:42.058 EAL: Detected lcore 100 as core 18 on socket 1 00:06:42.058 EAL: Detected lcore 101 as core 19 on socket 1 00:06:42.058 EAL: Detected lcore 102 as core 20 on socket 1 00:06:42.058 EAL: Detected lcore 103 as core 21 on socket 1 00:06:42.058 EAL: Detected lcore 104 as core 22 on socket 1 00:06:42.058 EAL: Detected lcore 105 as core 24 on socket 1 00:06:42.058 EAL: Detected lcore 106 as core 25 on socket 1 00:06:42.058 EAL: Detected lcore 107 as core 26 on socket 1 00:06:42.058 EAL: Detected lcore 108 as core 27 on socket 1 00:06:42.058 EAL: Detected lcore 109 as core 28 on socket 1 00:06:42.058 EAL: Detected lcore 110 as core 29 on socket 1 00:06:42.058 EAL: Detected lcore 111 as core 30 on socket 1 00:06:42.058 EAL: Maximum logical cores by configuration: 128 00:06:42.058 EAL: Detected CPU lcores: 112 00:06:42.058 EAL: Detected NUMA nodes: 2 00:06:42.058 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:42.058 EAL: Detected shared linkage of DPDK 00:06:42.058 EAL: No shared files mode enabled, IPC will be disabled 00:06:42.320 EAL: No shared files mode enabled, IPC is disabled 00:06:42.320 EAL: PCI driver qat for device 0000:1a:01.0 wants IOVA as 'PA' 
00:06:42.320 EAL: PCI driver qat for device 0000:1a:01.1 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1a:01.2 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1a:01.3 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1a:01.4 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1a:01.5 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1a:01.6 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1a:01.7 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1a:02.0 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1a:02.1 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1a:02.2 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1a:02.3 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1a:02.4 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1a:02.5 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1a:02.6 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1a:02.7 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1c:01.0 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1c:01.1 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1c:01.2 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1c:01.3 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1c:01.4 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1c:01.5 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1c:01.6 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1c:01.7 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1c:02.0 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1c:02.1 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1c:02.2 wants IOVA as 'PA' 00:06:42.320 EAL: PCI 
driver qat for device 0000:1c:02.3 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1c:02.4 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1c:02.5 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1c:02.6 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1c:02.7 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1e:01.0 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1e:01.1 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1e:01.2 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1e:01.3 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1e:01.4 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1e:01.5 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1e:01.6 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1e:01.7 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1e:02.0 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1e:02.1 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1e:02.2 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1e:02.3 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1e:02.4 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1e:02.5 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1e:02.6 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:1e:02.7 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3d:01.0 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3d:01.1 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3d:01.2 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3d:01.3 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3d:01.4 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 
0000:3d:01.5 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3d:01.6 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3d:01.7 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3d:02.0 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3d:02.1 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3d:02.2 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3d:02.3 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3d:02.4 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3d:02.5 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3d:02.6 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3d:02.7 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3f:01.0 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3f:01.1 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3f:01.2 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3f:01.3 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3f:01.4 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3f:01.5 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3f:01.6 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3f:01.7 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3f:02.0 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3f:02.1 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3f:02.2 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3f:02.3 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3f:02.4 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3f:02.5 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3f:02.6 wants IOVA as 'PA' 00:06:42.320 EAL: PCI driver qat for device 0000:3f:02.7 wants IOVA 
as 'PA' 00:06:42.320 EAL: Bus pci wants IOVA as 'PA' 00:06:42.320 EAL: Bus auxiliary wants IOVA as 'DC' 00:06:42.320 EAL: Bus vdev wants IOVA as 'DC' 00:06:42.320 EAL: Selected IOVA mode 'PA' 00:06:42.320 EAL: Probing VFIO support... 00:06:42.320 EAL: IOMMU type 1 (Type 1) is supported 00:06:42.320 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:42.320 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:42.320 EAL: VFIO support initialized 00:06:42.320 EAL: Ask a virtual area of 0x2e000 bytes 00:06:42.320 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:42.320 EAL: Setting up physically contiguous memory... 00:06:42.320 EAL: Setting maximum number of open files to 524288 00:06:42.321 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:42.321 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:42.321 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:42.321 EAL: Ask a virtual area of 0x61000 bytes 00:06:42.321 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:42.321 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:42.321 EAL: Ask a virtual area of 0x400000000 bytes 00:06:42.321 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:42.321 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:42.321 EAL: Ask a virtual area of 0x61000 bytes 00:06:42.321 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:42.321 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:42.321 EAL: Ask a virtual area of 0x400000000 bytes 00:06:42.321 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:42.321 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:42.321 EAL: Ask a virtual area of 0x61000 bytes 00:06:42.321 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:42.321 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:42.321 EAL: Ask a virtual area of 0x400000000 
bytes 00:06:42.321 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:42.321 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:42.321 EAL: Ask a virtual area of 0x61000 bytes 00:06:42.321 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:42.321 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:42.321 EAL: Ask a virtual area of 0x400000000 bytes 00:06:42.321 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:42.321 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:42.321 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:42.321 EAL: Ask a virtual area of 0x61000 bytes 00:06:42.321 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:42.321 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:42.321 EAL: Ask a virtual area of 0x400000000 bytes 00:06:42.321 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:42.321 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:42.321 EAL: Ask a virtual area of 0x61000 bytes 00:06:42.321 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:42.321 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:42.321 EAL: Ask a virtual area of 0x400000000 bytes 00:06:42.321 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:42.321 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:42.321 EAL: Ask a virtual area of 0x61000 bytes 00:06:42.321 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:42.321 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:42.321 EAL: Ask a virtual area of 0x400000000 bytes 00:06:42.321 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:42.321 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:42.321 EAL: Ask a virtual area of 0x61000 bytes 00:06:42.321 EAL: Virtual area found at 
0x201c00e00000 (size = 0x61000) 00:06:42.321 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:42.321 EAL: Ask a virtual area of 0x400000000 bytes 00:06:42.321 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:42.321 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:42.321 EAL: Hugepages will be freed exactly as allocated. 00:06:42.321 EAL: No shared files mode enabled, IPC is disabled 00:06:42.321 EAL: No shared files mode enabled, IPC is disabled 00:06:42.321 EAL: TSC frequency is ~2500000 KHz 00:06:42.321 EAL: Main lcore 0 is ready (tid=7f4c0dfc8b00;cpuset=[0]) 00:06:42.321 EAL: Trying to obtain current memory policy. 00:06:42.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.321 EAL: Restoring previous memory policy: 0 00:06:42.321 EAL: request: mp_malloc_sync 00:06:42.321 EAL: No shared files mode enabled, IPC is disabled 00:06:42.321 EAL: Heap on socket 0 was expanded by 2MB 00:06:42.321 EAL: PCI device 0000:1a:01.0 on NUMA socket 0 00:06:42.321 EAL: probe driver: 8086:37c9 qat 00:06:42.321 EAL: PCI memory mapped at 0x202001000000 00:06:42.321 EAL: PCI memory mapped at 0x202001001000 00:06:42.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:06:42.321 EAL: PCI device 0000:1a:01.1 on NUMA socket 0 00:06:42.321 EAL: probe driver: 8086:37c9 qat 00:06:42.321 EAL: PCI memory mapped at 0x202001002000 00:06:42.321 EAL: PCI memory mapped at 0x202001003000 00:06:42.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:06:42.321 EAL: PCI device 0000:1a:01.2 on NUMA socket 0 00:06:42.321 EAL: probe driver: 8086:37c9 qat 00:06:42.321 EAL: PCI memory mapped at 0x202001004000 00:06:42.321 EAL: PCI memory mapped at 0x202001005000 00:06:42.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:06:42.321 EAL: PCI device 0000:1a:01.3 on NUMA socket 0 00:06:42.321 EAL: probe driver: 8086:37c9 qat 00:06:42.321 EAL: PCI memory mapped at 
0x202001006000 00:06:42.321 EAL: PCI memory mapped at 0x202001007000 00:06:42.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:06:42.321 EAL: PCI device 0000:1a:01.4 on NUMA socket 0 00:06:42.321 EAL: probe driver: 8086:37c9 qat 00:06:42.321 EAL: PCI memory mapped at 0x202001008000 00:06:42.321 EAL: PCI memory mapped at 0x202001009000 00:06:42.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:06:42.321 EAL: PCI device 0000:1a:01.5 on NUMA socket 0 00:06:42.321 EAL: probe driver: 8086:37c9 qat 00:06:42.321 EAL: PCI memory mapped at 0x20200100a000 00:06:42.321 EAL: PCI memory mapped at 0x20200100b000 00:06:42.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:06:42.321 EAL: PCI device 0000:1a:01.6 on NUMA socket 0 00:06:42.321 EAL: probe driver: 8086:37c9 qat 00:06:42.321 EAL: PCI memory mapped at 0x20200100c000 00:06:42.321 EAL: PCI memory mapped at 0x20200100d000 00:06:42.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:06:42.321 EAL: PCI device 0000:1a:01.7 on NUMA socket 0 00:06:42.321 EAL: probe driver: 8086:37c9 qat 00:06:42.321 EAL: PCI memory mapped at 0x20200100e000 00:06:42.321 EAL: PCI memory mapped at 0x20200100f000 00:06:42.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:06:42.321 EAL: PCI device 0000:1a:02.0 on NUMA socket 0 00:06:42.321 EAL: probe driver: 8086:37c9 qat 00:06:42.321 EAL: PCI memory mapped at 0x202001010000 00:06:42.321 EAL: PCI memory mapped at 0x202001011000 00:06:42.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:06:42.321 EAL: PCI device 0000:1a:02.1 on NUMA socket 0 00:06:42.321 EAL: probe driver: 8086:37c9 qat 00:06:42.321 EAL: PCI memory mapped at 0x202001012000 00:06:42.321 EAL: PCI memory mapped at 0x202001013000 00:06:42.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:06:42.321 EAL: PCI device 0000:1a:02.2 on NUMA socket 0 
00:06:42.321 EAL: probe driver: 8086:37c9 qat 00:06:42.321 EAL: PCI memory mapped at 0x202001014000 00:06:42.321 EAL: PCI memory mapped at 0x202001015000 00:06:42.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:06:42.321 EAL: PCI device 0000:1a:02.3 on NUMA socket 0 00:06:42.321 EAL: probe driver: 8086:37c9 qat 00:06:42.321 EAL: PCI memory mapped at 0x202001016000 00:06:42.321 EAL: PCI memory mapped at 0x202001017000 00:06:42.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:06:42.321 EAL: PCI device 0000:1a:02.4 on NUMA socket 0 00:06:42.321 EAL: probe driver: 8086:37c9 qat 00:06:42.321 EAL: PCI memory mapped at 0x202001018000 00:06:42.321 EAL: PCI memory mapped at 0x202001019000 00:06:42.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:06:42.321 EAL: PCI device 0000:1a:02.5 on NUMA socket 0 00:06:42.321 EAL: probe driver: 8086:37c9 qat 00:06:42.321 EAL: PCI memory mapped at 0x20200101a000 00:06:42.321 EAL: PCI memory mapped at 0x20200101b000 00:06:42.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:06:42.321 EAL: PCI device 0000:1a:02.6 on NUMA socket 0 00:06:42.321 EAL: probe driver: 8086:37c9 qat 00:06:42.321 EAL: PCI memory mapped at 0x20200101c000 00:06:42.321 EAL: PCI memory mapped at 0x20200101d000 00:06:42.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:06:42.321 EAL: PCI device 0000:1a:02.7 on NUMA socket 0 00:06:42.321 EAL: probe driver: 8086:37c9 qat 00:06:42.321 EAL: PCI memory mapped at 0x20200101e000 00:06:42.321 EAL: PCI memory mapped at 0x20200101f000 00:06:42.321 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:06:42.321 EAL: PCI device 0000:1c:01.0 on NUMA socket 0 00:06:42.321 EAL: probe driver: 8086:37c9 qat 00:06:42.321 EAL: PCI memory mapped at 0x202001020000 00:06:42.321 EAL: PCI memory mapped at 0x202001021000 00:06:42.321 EAL: Probe PCI driver: qat (8086:37c9) device: 
0000:1c:01.0 (socket 0) 00:06:42.322 EAL: PCI device 0000:1c:01.1 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x202001022000 00:06:42.322 EAL: PCI memory mapped at 0x202001023000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:06:42.322 EAL: PCI device 0000:1c:01.2 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x202001024000 00:06:42.322 EAL: PCI memory mapped at 0x202001025000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:06:42.322 EAL: PCI device 0000:1c:01.3 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x202001026000 00:06:42.322 EAL: PCI memory mapped at 0x202001027000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:06:42.322 EAL: PCI device 0000:1c:01.4 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x202001028000 00:06:42.322 EAL: PCI memory mapped at 0x202001029000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:06:42.322 EAL: PCI device 0000:1c:01.5 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x20200102a000 00:06:42.322 EAL: PCI memory mapped at 0x20200102b000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:06:42.322 EAL: PCI device 0000:1c:01.6 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x20200102c000 00:06:42.322 EAL: PCI memory mapped at 0x20200102d000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:06:42.322 EAL: PCI device 0000:1c:01.7 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x20200102e000 00:06:42.322 EAL: PCI memory 
mapped at 0x20200102f000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:06:42.322 EAL: PCI device 0000:1c:02.0 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x202001030000 00:06:42.322 EAL: PCI memory mapped at 0x202001031000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:06:42.322 EAL: PCI device 0000:1c:02.1 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x202001032000 00:06:42.322 EAL: PCI memory mapped at 0x202001033000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:06:42.322 EAL: PCI device 0000:1c:02.2 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x202001034000 00:06:42.322 EAL: PCI memory mapped at 0x202001035000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:06:42.322 EAL: PCI device 0000:1c:02.3 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x202001036000 00:06:42.322 EAL: PCI memory mapped at 0x202001037000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:06:42.322 EAL: PCI device 0000:1c:02.4 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x202001038000 00:06:42.322 EAL: PCI memory mapped at 0x202001039000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:06:42.322 EAL: PCI device 0000:1c:02.5 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x20200103a000 00:06:42.322 EAL: PCI memory mapped at 0x20200103b000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:06:42.322 EAL: PCI device 0000:1c:02.6 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 
00:06:42.322 EAL: PCI memory mapped at 0x20200103c000 00:06:42.322 EAL: PCI memory mapped at 0x20200103d000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:06:42.322 EAL: PCI device 0000:1c:02.7 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x20200103e000 00:06:42.322 EAL: PCI memory mapped at 0x20200103f000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:06:42.322 EAL: PCI device 0000:1e:01.0 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x202001040000 00:06:42.322 EAL: PCI memory mapped at 0x202001041000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:06:42.322 EAL: PCI device 0000:1e:01.1 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x202001042000 00:06:42.322 EAL: PCI memory mapped at 0x202001043000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:06:42.322 EAL: PCI device 0000:1e:01.2 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x202001044000 00:06:42.322 EAL: PCI memory mapped at 0x202001045000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:06:42.322 EAL: PCI device 0000:1e:01.3 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x202001046000 00:06:42.322 EAL: PCI memory mapped at 0x202001047000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:06:42.322 EAL: PCI device 0000:1e:01.4 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x202001048000 00:06:42.322 EAL: PCI memory mapped at 0x202001049000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:06:42.322 EAL: PCI 
device 0000:1e:01.5 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x20200104a000 00:06:42.322 EAL: PCI memory mapped at 0x20200104b000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:06:42.322 EAL: PCI device 0000:1e:01.6 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x20200104c000 00:06:42.322 EAL: PCI memory mapped at 0x20200104d000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:06:42.322 EAL: PCI device 0000:1e:01.7 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x20200104e000 00:06:42.322 EAL: PCI memory mapped at 0x20200104f000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:06:42.322 EAL: PCI device 0000:1e:02.0 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x202001050000 00:06:42.322 EAL: PCI memory mapped at 0x202001051000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:06:42.322 EAL: PCI device 0000:1e:02.1 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x202001052000 00:06:42.322 EAL: PCI memory mapped at 0x202001053000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:06:42.322 EAL: PCI device 0000:1e:02.2 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x202001054000 00:06:42.322 EAL: PCI memory mapped at 0x202001055000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:06:42.322 EAL: PCI device 0000:1e:02.3 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x202001056000 00:06:42.322 EAL: PCI memory mapped at 0x202001057000 00:06:42.322 EAL: Probe 
PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:06:42.322 EAL: PCI device 0000:1e:02.4 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x202001058000 00:06:42.322 EAL: PCI memory mapped at 0x202001059000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:06:42.322 EAL: PCI device 0000:1e:02.5 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x20200105a000 00:06:42.322 EAL: PCI memory mapped at 0x20200105b000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:06:42.322 EAL: PCI device 0000:1e:02.6 on NUMA socket 0 00:06:42.322 EAL: probe driver: 8086:37c9 qat 00:06:42.322 EAL: PCI memory mapped at 0x20200105c000 00:06:42.322 EAL: PCI memory mapped at 0x20200105d000 00:06:42.322 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:06:42.322 EAL: PCI device 0000:1e:02.7 on NUMA socket 0 00:06:42.323 EAL: probe driver: 8086:37c9 qat 00:06:42.323 EAL: PCI memory mapped at 0x20200105e000 00:06:42.323 EAL: PCI memory mapped at 0x20200105f000 00:06:42.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:06:42.323 EAL: PCI device 0000:3d:01.0 on NUMA socket 0 00:06:42.323 EAL: probe driver: 8086:37c9 qat 00:06:42.323 EAL: PCI memory mapped at 0x202001060000 00:06:42.323 EAL: PCI memory mapped at 0x202001061000 00:06:42.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:06:42.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.323 EAL: PCI memory unmapped at 0x202001060000 00:06:42.323 EAL: PCI memory unmapped at 0x202001061000 00:06:42.323 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:42.323 EAL: PCI device 0000:3d:01.1 on NUMA socket 0 00:06:42.323 EAL: probe driver: 8086:37c9 qat 00:06:42.323 EAL: PCI memory mapped at 0x202001062000 00:06:42.323 EAL: PCI memory mapped at 
0x202001063000 00:06:42.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:06:42.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.323 EAL: PCI memory unmapped at 0x202001062000 00:06:42.323 EAL: PCI memory unmapped at 0x202001063000 00:06:42.323 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:42.323 EAL: PCI device 0000:3d:01.2 on NUMA socket 0 00:06:42.323 EAL: probe driver: 8086:37c9 qat 00:06:42.323 EAL: PCI memory mapped at 0x202001064000 00:06:42.323 EAL: PCI memory mapped at 0x202001065000 00:06:42.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:06:42.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.323 EAL: PCI memory unmapped at 0x202001064000 00:06:42.323 EAL: PCI memory unmapped at 0x202001065000 00:06:42.323 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:42.323 EAL: PCI device 0000:3d:01.3 on NUMA socket 0 00:06:42.323 EAL: probe driver: 8086:37c9 qat 00:06:42.323 EAL: PCI memory mapped at 0x202001066000 00:06:42.323 EAL: PCI memory mapped at 0x202001067000 00:06:42.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:06:42.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.323 EAL: PCI memory unmapped at 0x202001066000 00:06:42.323 EAL: PCI memory unmapped at 0x202001067000 00:06:42.323 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:42.323 EAL: PCI device 0000:3d:01.4 on NUMA socket 0 00:06:42.323 EAL: probe driver: 8086:37c9 qat 00:06:42.323 EAL: PCI memory mapped at 0x202001068000 00:06:42.323 EAL: PCI memory mapped at 0x202001069000 00:06:42.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:06:42.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.323 EAL: PCI memory unmapped at 0x202001068000 00:06:42.323 EAL: PCI memory unmapped at 0x202001069000 00:06:42.323 EAL: Requested device 0000:3d:01.4 cannot be 
used 00:06:42.323 EAL: PCI device 0000:3d:01.5 on NUMA socket 0 00:06:42.323 EAL: probe driver: 8086:37c9 qat 00:06:42.323 EAL: PCI memory mapped at 0x20200106a000 00:06:42.323 EAL: PCI memory mapped at 0x20200106b000 00:06:42.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:06:42.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.323 EAL: PCI memory unmapped at 0x20200106a000 00:06:42.323 EAL: PCI memory unmapped at 0x20200106b000 00:06:42.323 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:42.323 EAL: PCI device 0000:3d:01.6 on NUMA socket 0 00:06:42.323 EAL: probe driver: 8086:37c9 qat 00:06:42.323 EAL: PCI memory mapped at 0x20200106c000 00:06:42.323 EAL: PCI memory mapped at 0x20200106d000 00:06:42.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:06:42.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.323 EAL: PCI memory unmapped at 0x20200106c000 00:06:42.323 EAL: PCI memory unmapped at 0x20200106d000 00:06:42.323 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:42.323 EAL: PCI device 0000:3d:01.7 on NUMA socket 0 00:06:42.323 EAL: probe driver: 8086:37c9 qat 00:06:42.323 EAL: PCI memory mapped at 0x20200106e000 00:06:42.323 EAL: PCI memory mapped at 0x20200106f000 00:06:42.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:06:42.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.323 EAL: PCI memory unmapped at 0x20200106e000 00:06:42.323 EAL: PCI memory unmapped at 0x20200106f000 00:06:42.323 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:42.323 EAL: PCI device 0000:3d:02.0 on NUMA socket 0 00:06:42.323 EAL: probe driver: 8086:37c9 qat 00:06:42.323 EAL: PCI memory mapped at 0x202001070000 00:06:42.323 EAL: PCI memory mapped at 0x202001071000 00:06:42.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:06:42.323 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:06:42.323 EAL: PCI memory unmapped at 0x202001070000 00:06:42.323 EAL: PCI memory unmapped at 0x202001071000 00:06:42.323 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:42.323 EAL: PCI device 0000:3d:02.1 on NUMA socket 0 00:06:42.323 EAL: probe driver: 8086:37c9 qat 00:06:42.323 EAL: PCI memory mapped at 0x202001072000 00:06:42.323 EAL: PCI memory mapped at 0x202001073000 00:06:42.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:06:42.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.323 EAL: PCI memory unmapped at 0x202001072000 00:06:42.323 EAL: PCI memory unmapped at 0x202001073000 00:06:42.323 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:42.323 EAL: PCI device 0000:3d:02.2 on NUMA socket 0 00:06:42.323 EAL: probe driver: 8086:37c9 qat 00:06:42.323 EAL: PCI memory mapped at 0x202001074000 00:06:42.323 EAL: PCI memory mapped at 0x202001075000 00:06:42.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:06:42.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.323 EAL: PCI memory unmapped at 0x202001074000 00:06:42.323 EAL: PCI memory unmapped at 0x202001075000 00:06:42.323 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:42.323 EAL: PCI device 0000:3d:02.3 on NUMA socket 0 00:06:42.323 EAL: probe driver: 8086:37c9 qat 00:06:42.323 EAL: PCI memory mapped at 0x202001076000 00:06:42.323 EAL: PCI memory mapped at 0x202001077000 00:06:42.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:06:42.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.323 EAL: PCI memory unmapped at 0x202001076000 00:06:42.323 EAL: PCI memory unmapped at 0x202001077000 00:06:42.323 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:42.323 EAL: PCI device 0000:3d:02.4 on NUMA socket 0 00:06:42.323 EAL: probe driver: 8086:37c9 qat 00:06:42.323 EAL: PCI memory mapped at 
0x202001078000 00:06:42.323 EAL: PCI memory mapped at 0x202001079000 00:06:42.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:06:42.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.323 EAL: PCI memory unmapped at 0x202001078000 00:06:42.323 EAL: PCI memory unmapped at 0x202001079000 00:06:42.323 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:42.323 EAL: PCI device 0000:3d:02.5 on NUMA socket 0 00:06:42.323 EAL: probe driver: 8086:37c9 qat 00:06:42.323 EAL: PCI memory mapped at 0x20200107a000 00:06:42.323 EAL: PCI memory mapped at 0x20200107b000 00:06:42.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:06:42.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.323 EAL: PCI memory unmapped at 0x20200107a000 00:06:42.323 EAL: PCI memory unmapped at 0x20200107b000 00:06:42.323 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:42.323 EAL: PCI device 0000:3d:02.6 on NUMA socket 0 00:06:42.323 EAL: probe driver: 8086:37c9 qat 00:06:42.323 EAL: PCI memory mapped at 0x20200107c000 00:06:42.323 EAL: PCI memory mapped at 0x20200107d000 00:06:42.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:06:42.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.323 EAL: PCI memory unmapped at 0x20200107c000 00:06:42.323 EAL: PCI memory unmapped at 0x20200107d000 00:06:42.323 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:42.323 EAL: PCI device 0000:3d:02.7 on NUMA socket 0 00:06:42.323 EAL: probe driver: 8086:37c9 qat 00:06:42.323 EAL: PCI memory mapped at 0x20200107e000 00:06:42.323 EAL: PCI memory mapped at 0x20200107f000 00:06:42.323 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:06:42.323 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.323 EAL: PCI memory unmapped at 0x20200107e000 00:06:42.323 EAL: PCI memory unmapped at 0x20200107f000 
00:06:42.323 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:42.323 EAL: PCI device 0000:3f:01.0 on NUMA socket 0 00:06:42.323 EAL: probe driver: 8086:37c9 qat 00:06:42.324 EAL: PCI memory mapped at 0x202001080000 00:06:42.324 EAL: PCI memory mapped at 0x202001081000 00:06:42.324 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:06:42.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.324 EAL: PCI memory unmapped at 0x202001080000 00:06:42.324 EAL: PCI memory unmapped at 0x202001081000 00:06:42.324 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:42.324 EAL: PCI device 0000:3f:01.1 on NUMA socket 0 00:06:42.324 EAL: probe driver: 8086:37c9 qat 00:06:42.324 EAL: PCI memory mapped at 0x202001082000 00:06:42.324 EAL: PCI memory mapped at 0x202001083000 00:06:42.324 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:06:42.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.324 EAL: PCI memory unmapped at 0x202001082000 00:06:42.324 EAL: PCI memory unmapped at 0x202001083000 00:06:42.324 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:42.324 EAL: PCI device 0000:3f:01.2 on NUMA socket 0 00:06:42.324 EAL: probe driver: 8086:37c9 qat 00:06:42.324 EAL: PCI memory mapped at 0x202001084000 00:06:42.324 EAL: PCI memory mapped at 0x202001085000 00:06:42.324 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:06:42.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.324 EAL: PCI memory unmapped at 0x202001084000 00:06:42.324 EAL: PCI memory unmapped at 0x202001085000 00:06:42.324 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:42.324 EAL: PCI device 0000:3f:01.3 on NUMA socket 0 00:06:42.324 EAL: probe driver: 8086:37c9 qat 00:06:42.324 EAL: PCI memory mapped at 0x202001086000 00:06:42.324 EAL: PCI memory mapped at 0x202001087000 00:06:42.324 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 
(socket 0) 00:06:42.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.324 EAL: PCI memory unmapped at 0x202001086000 00:06:42.324 EAL: PCI memory unmapped at 0x202001087000 00:06:42.324 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:42.324 EAL: PCI device 0000:3f:01.4 on NUMA socket 0 00:06:42.324 EAL: probe driver: 8086:37c9 qat 00:06:42.324 EAL: PCI memory mapped at 0x202001088000 00:06:42.324 EAL: PCI memory mapped at 0x202001089000 00:06:42.324 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:06:42.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.324 EAL: PCI memory unmapped at 0x202001088000 00:06:42.324 EAL: PCI memory unmapped at 0x202001089000 00:06:42.324 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:42.324 EAL: PCI device 0000:3f:01.5 on NUMA socket 0 00:06:42.324 EAL: probe driver: 8086:37c9 qat 00:06:42.324 EAL: PCI memory mapped at 0x20200108a000 00:06:42.324 EAL: PCI memory mapped at 0x20200108b000 00:06:42.324 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:06:42.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.324 EAL: PCI memory unmapped at 0x20200108a000 00:06:42.324 EAL: PCI memory unmapped at 0x20200108b000 00:06:42.324 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:42.324 EAL: PCI device 0000:3f:01.6 on NUMA socket 0 00:06:42.324 EAL: probe driver: 8086:37c9 qat 00:06:42.324 EAL: PCI memory mapped at 0x20200108c000 00:06:42.324 EAL: PCI memory mapped at 0x20200108d000 00:06:42.324 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:06:42.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.324 EAL: PCI memory unmapped at 0x20200108c000 00:06:42.324 EAL: PCI memory unmapped at 0x20200108d000 00:06:42.324 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:42.324 EAL: PCI device 0000:3f:01.7 on NUMA socket 0 00:06:42.324 EAL: probe 
driver: 8086:37c9 qat 00:06:42.324 EAL: PCI memory mapped at 0x20200108e000 00:06:42.324 EAL: PCI memory mapped at 0x20200108f000 00:06:42.324 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:06:42.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.324 EAL: PCI memory unmapped at 0x20200108e000 00:06:42.324 EAL: PCI memory unmapped at 0x20200108f000 00:06:42.324 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:42.324 EAL: PCI device 0000:3f:02.0 on NUMA socket 0 00:06:42.324 EAL: probe driver: 8086:37c9 qat 00:06:42.324 EAL: PCI memory mapped at 0x202001090000 00:06:42.324 EAL: PCI memory mapped at 0x202001091000 00:06:42.324 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:06:42.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.324 EAL: PCI memory unmapped at 0x202001090000 00:06:42.324 EAL: PCI memory unmapped at 0x202001091000 00:06:42.324 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:42.324 EAL: PCI device 0000:3f:02.1 on NUMA socket 0 00:06:42.324 EAL: probe driver: 8086:37c9 qat 00:06:42.324 EAL: PCI memory mapped at 0x202001092000 00:06:42.324 EAL: PCI memory mapped at 0x202001093000 00:06:42.324 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:06:42.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.324 EAL: PCI memory unmapped at 0x202001092000 00:06:42.324 EAL: PCI memory unmapped at 0x202001093000 00:06:42.324 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:42.324 EAL: PCI device 0000:3f:02.2 on NUMA socket 0 00:06:42.324 EAL: probe driver: 8086:37c9 qat 00:06:42.324 EAL: PCI memory mapped at 0x202001094000 00:06:42.324 EAL: PCI memory mapped at 0x202001095000 00:06:42.324 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:06:42.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.324 EAL: PCI memory unmapped at 0x202001094000 
00:06:42.324 EAL: PCI memory unmapped at 0x202001095000 00:06:42.324 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:42.324 EAL: PCI device 0000:3f:02.3 on NUMA socket 0 00:06:42.324 EAL: probe driver: 8086:37c9 qat 00:06:42.324 EAL: PCI memory mapped at 0x202001096000 00:06:42.324 EAL: PCI memory mapped at 0x202001097000 00:06:42.324 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:06:42.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.324 EAL: PCI memory unmapped at 0x202001096000 00:06:42.324 EAL: PCI memory unmapped at 0x202001097000 00:06:42.324 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:42.324 EAL: PCI device 0000:3f:02.4 on NUMA socket 0 00:06:42.324 EAL: probe driver: 8086:37c9 qat 00:06:42.324 EAL: PCI memory mapped at 0x202001098000 00:06:42.324 EAL: PCI memory mapped at 0x202001099000 00:06:42.324 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:06:42.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.324 EAL: PCI memory unmapped at 0x202001098000 00:06:42.324 EAL: PCI memory unmapped at 0x202001099000 00:06:42.324 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:42.324 EAL: PCI device 0000:3f:02.5 on NUMA socket 0 00:06:42.324 EAL: probe driver: 8086:37c9 qat 00:06:42.324 EAL: PCI memory mapped at 0x20200109a000 00:06:42.324 EAL: PCI memory mapped at 0x20200109b000 00:06:42.324 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:06:42.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.324 EAL: PCI memory unmapped at 0x20200109a000 00:06:42.324 EAL: PCI memory unmapped at 0x20200109b000 00:06:42.324 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:42.324 EAL: PCI device 0000:3f:02.6 on NUMA socket 0 00:06:42.324 EAL: probe driver: 8086:37c9 qat 00:06:42.324 EAL: PCI memory mapped at 0x20200109c000 00:06:42.324 EAL: PCI memory mapped at 0x20200109d000 00:06:42.324 EAL: 
Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:06:42.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.324 EAL: PCI memory unmapped at 0x20200109c000 00:06:42.324 EAL: PCI memory unmapped at 0x20200109d000 00:06:42.324 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:42.324 EAL: PCI device 0000:3f:02.7 on NUMA socket 0 00:06:42.324 EAL: probe driver: 8086:37c9 qat 00:06:42.324 EAL: PCI memory mapped at 0x20200109e000 00:06:42.324 EAL: PCI memory mapped at 0x20200109f000 00:06:42.324 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:06:42.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:42.324 EAL: PCI memory unmapped at 0x20200109e000 00:06:42.324 EAL: PCI memory unmapped at 0x20200109f000 00:06:42.324 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:42.324 EAL: No shared files mode enabled, IPC is disabled 00:06:42.324 EAL: No shared files mode enabled, IPC is disabled 00:06:42.324 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:42.324 EAL: Mem event callback 'spdk:(nil)' registered 00:06:42.324 00:06:42.324 00:06:42.324 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.324 http://cunit.sourceforge.net/ 00:06:42.324 00:06:42.325 00:06:42.325 Suite: components_suite 00:06:42.325 Test: vtophys_malloc_test ...passed 00:06:42.325 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:06:42.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.325 EAL: Restoring previous memory policy: 4 00:06:42.325 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.325 EAL: request: mp_malloc_sync 00:06:42.325 EAL: No shared files mode enabled, IPC is disabled 00:06:42.325 EAL: Heap on socket 0 was expanded by 4MB 00:06:42.325 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.325 EAL: request: mp_malloc_sync 00:06:42.325 EAL: No shared files mode enabled, IPC is disabled 00:06:42.325 EAL: Heap on socket 0 was shrunk by 4MB 00:06:42.325 EAL: Trying to obtain current memory policy. 00:06:42.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.325 EAL: Restoring previous memory policy: 4 00:06:42.325 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.325 EAL: request: mp_malloc_sync 00:06:42.325 EAL: No shared files mode enabled, IPC is disabled 00:06:42.325 EAL: Heap on socket 0 was expanded by 6MB 00:06:42.325 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.325 EAL: request: mp_malloc_sync 00:06:42.325 EAL: No shared files mode enabled, IPC is disabled 00:06:42.325 EAL: Heap on socket 0 was shrunk by 6MB 00:06:42.325 EAL: Trying to obtain current memory policy. 00:06:42.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.325 EAL: Restoring previous memory policy: 4 00:06:42.325 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.325 EAL: request: mp_malloc_sync 00:06:42.325 EAL: No shared files mode enabled, IPC is disabled 00:06:42.325 EAL: Heap on socket 0 was expanded by 10MB 00:06:42.325 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.325 EAL: request: mp_malloc_sync 00:06:42.325 EAL: No shared files mode enabled, IPC is disabled 00:06:42.325 EAL: Heap on socket 0 was shrunk by 10MB 00:06:42.325 EAL: Trying to obtain current memory policy. 
00:06:42.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.325 EAL: Restoring previous memory policy: 4 00:06:42.325 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.325 EAL: request: mp_malloc_sync 00:06:42.325 EAL: No shared files mode enabled, IPC is disabled 00:06:42.325 EAL: Heap on socket 0 was expanded by 18MB 00:06:42.325 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.325 EAL: request: mp_malloc_sync 00:06:42.325 EAL: No shared files mode enabled, IPC is disabled 00:06:42.325 EAL: Heap on socket 0 was shrunk by 18MB 00:06:42.325 EAL: Trying to obtain current memory policy. 00:06:42.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.325 EAL: Restoring previous memory policy: 4 00:06:42.325 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.325 EAL: request: mp_malloc_sync 00:06:42.325 EAL: No shared files mode enabled, IPC is disabled 00:06:42.325 EAL: Heap on socket 0 was expanded by 34MB 00:06:42.325 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.325 EAL: request: mp_malloc_sync 00:06:42.325 EAL: No shared files mode enabled, IPC is disabled 00:06:42.325 EAL: Heap on socket 0 was shrunk by 34MB 00:06:42.325 EAL: Trying to obtain current memory policy. 00:06:42.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.325 EAL: Restoring previous memory policy: 4 00:06:42.325 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.325 EAL: request: mp_malloc_sync 00:06:42.325 EAL: No shared files mode enabled, IPC is disabled 00:06:42.325 EAL: Heap on socket 0 was expanded by 66MB 00:06:42.325 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.325 EAL: request: mp_malloc_sync 00:06:42.325 EAL: No shared files mode enabled, IPC is disabled 00:06:42.325 EAL: Heap on socket 0 was shrunk by 66MB 00:06:42.325 EAL: Trying to obtain current memory policy. 
00:06:42.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.325 EAL: Restoring previous memory policy: 4 00:06:42.325 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.325 EAL: request: mp_malloc_sync 00:06:42.325 EAL: No shared files mode enabled, IPC is disabled 00:06:42.325 EAL: Heap on socket 0 was expanded by 130MB 00:06:42.325 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.325 EAL: request: mp_malloc_sync 00:06:42.325 EAL: No shared files mode enabled, IPC is disabled 00:06:42.325 EAL: Heap on socket 0 was shrunk by 130MB 00:06:42.325 EAL: Trying to obtain current memory policy. 00:06:42.325 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.584 EAL: Restoring previous memory policy: 4 00:06:42.584 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.584 EAL: request: mp_malloc_sync 00:06:42.584 EAL: No shared files mode enabled, IPC is disabled 00:06:42.584 EAL: Heap on socket 0 was expanded by 258MB 00:06:42.584 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.584 EAL: request: mp_malloc_sync 00:06:42.584 EAL: No shared files mode enabled, IPC is disabled 00:06:42.584 EAL: Heap on socket 0 was shrunk by 258MB 00:06:42.584 EAL: Trying to obtain current memory policy. 00:06:42.584 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:42.584 EAL: Restoring previous memory policy: 4 00:06:42.584 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.584 EAL: request: mp_malloc_sync 00:06:42.584 EAL: No shared files mode enabled, IPC is disabled 00:06:42.584 EAL: Heap on socket 0 was expanded by 514MB 00:06:42.843 EAL: Calling mem event callback 'spdk:(nil)' 00:06:42.843 EAL: request: mp_malloc_sync 00:06:42.843 EAL: No shared files mode enabled, IPC is disabled 00:06:42.843 EAL: Heap on socket 0 was shrunk by 514MB 00:06:42.843 EAL: Trying to obtain current memory policy. 
00:06:42.843 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:43.101 EAL: Restoring previous memory policy: 4 00:06:43.101 EAL: Calling mem event callback 'spdk:(nil)' 00:06:43.101 EAL: request: mp_malloc_sync 00:06:43.101 EAL: No shared files mode enabled, IPC is disabled 00:06:43.101 EAL: Heap on socket 0 was expanded by 1026MB 00:06:43.101 EAL: Calling mem event callback 'spdk:(nil)' 00:06:43.360 EAL: request: mp_malloc_sync 00:06:43.360 EAL: No shared files mode enabled, IPC is disabled 00:06:43.360 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:43.360 passed 00:06:43.360 00:06:43.360 Run Summary: Type Total Ran Passed Failed Inactive 00:06:43.360 suites 1 1 n/a 0 0 00:06:43.360 tests 2 2 2 0 0 00:06:43.360 asserts 6590 6590 6590 0 n/a 00:06:43.360 00:06:43.360 Elapsed time = 1.017 seconds 00:06:43.360 EAL: No shared files mode enabled, IPC is disabled 00:06:43.360 EAL: No shared files mode enabled, IPC is disabled 00:06:43.360 EAL: No shared files mode enabled, IPC is disabled 00:06:43.360 00:06:43.360 real 0m1.223s 00:06:43.360 user 0m0.698s 00:06:43.360 sys 0m0.494s 00:06:43.360 13:06:23 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.360 13:06:23 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:43.360 ************************************ 00:06:43.360 END TEST env_vtophys 00:06:43.361 ************************************ 00:06:43.361 13:06:23 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut 00:06:43.361 13:06:23 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.361 13:06:23 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.361 13:06:23 env -- common/autotest_common.sh@10 -- # set +x 00:06:43.361 ************************************ 00:06:43.361 START TEST env_pci 00:06:43.361 ************************************ 00:06:43.361 13:06:23 env.env_pci -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut 00:06:43.361 00:06:43.361 00:06:43.361 CUnit - A unit testing framework for C - Version 2.1-3 00:06:43.361 http://cunit.sourceforge.net/ 00:06:43.361 00:06:43.361 00:06:43.361 Suite: pci 00:06:43.361 Test: pci_hook ...[2024-07-26 13:06:23.871355] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 600360 has claimed it 00:06:43.621 EAL: Cannot find device (10000:00:01.0) 00:06:43.621 EAL: Failed to attach device on primary process 00:06:43.621 passed 00:06:43.621 00:06:43.621 Run Summary: Type Total Ran Passed Failed Inactive 00:06:43.621 suites 1 1 n/a 0 0 00:06:43.621 tests 1 1 1 0 0 00:06:43.621 asserts 25 25 25 0 n/a 00:06:43.621 00:06:43.621 Elapsed time = 0.046 seconds 00:06:43.621 00:06:43.621 real 0m0.071s 00:06:43.621 user 0m0.026s 00:06:43.621 sys 0m0.045s 00:06:43.621 13:06:23 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.621 13:06:23 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:43.621 ************************************ 00:06:43.621 END TEST env_pci 00:06:43.621 ************************************ 00:06:43.621 13:06:23 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:43.621 13:06:23 env -- env/env.sh@15 -- # uname 00:06:43.621 13:06:23 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:43.621 13:06:23 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:43.621 13:06:23 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:43.621 13:06:23 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:43.621 13:06:23 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.621 13:06:23 env -- common/autotest_common.sh@10 -- # set +x 
00:06:43.621 ************************************ 00:06:43.621 START TEST env_dpdk_post_init 00:06:43.621 ************************************ 00:06:43.621 13:06:24 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:43.621 EAL: Detected CPU lcores: 112 00:06:43.621 EAL: Detected NUMA nodes: 2 00:06:43.621 EAL: Detected shared linkage of DPDK 00:06:43.621 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:43.621 EAL: Selected IOVA mode 'PA' 00:06:43.621 EAL: VFIO support initialized 00:06:43.621 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:06:43.621 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_asym 00:06:43.621 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.621 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_sym 00:06:43.621 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.621 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:06:43.621 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_asym 00:06:43.621 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.621 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_sym 00:06:43.621 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.621 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:06:43.621 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_asym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_sym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.622 EAL: Probe PCI driver: qat 
(8086:37c9) device: 0000:1a:01.3 (socket 0) 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_asym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_sym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.622 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_asym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_sym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.622 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_asym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_sym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.622 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_asym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_sym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.622 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_asym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.622 CRYPTODEV: 
Creating cryptodev 0000:1a:01.7_qat_sym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.622 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_asym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_sym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.622 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_asym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_sym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.622 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_asym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_sym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.622 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_asym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_sym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.622 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 
00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_asym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_sym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.622 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_asym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_sym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.622 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_asym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_sym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.622 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_asym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_sym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.622 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_asym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_sym 
00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.622 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_asym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_sym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.622 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_asym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_sym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.622 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_asym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_sym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.622 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_asym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_sym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.622 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:06:43.622 CRYPTODEV: Creating cryptodev 
0000:1c:01.5_qat_asym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_sym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.622 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_asym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_sym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.622 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_asym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_sym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.622 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_asym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_sym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.622 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_asym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_sym 00:06:43.622 CRYPTODEV: Initialisation 
parameters - name: 0000:1c:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.622 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_asym 00:06:43.622 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.622 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_sym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.623 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_asym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_sym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.623 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_asym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_sym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.623 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_asym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_sym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.623 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_asym 00:06:43.623 CRYPTODEV: 
Initialisation parameters - name: 0000:1c:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_sym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.623 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_asym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_sym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.623 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_asym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_sym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.623 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_asym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_sym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.623 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_asym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_sym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_sym,socket id: 0, 
max queue pairs: 0 00:06:43.623 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_asym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_sym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.623 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_asym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_sym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.623 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_asym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_sym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.623 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_asym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_sym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.623 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_asym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 
0000:1e:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_sym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.623 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_asym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_sym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.623 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_asym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_sym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.623 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_asym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_sym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.623 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_asym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_sym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.623 
EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_asym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_sym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.623 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_asym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_sym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.623 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_asym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_sym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.623 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_asym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:43.623 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_sym 00:06:43.623 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:43.623 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:06:43.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.623 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:43.623 EAL: Probe PCI driver: 
qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:06:43.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.623 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:43.623 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:06:43.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.623 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:43.623 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:06:43.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.623 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:43.623 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:06:43.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.623 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:43.623 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:06:43.623 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.623 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:43.624 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:06:43.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.624 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:43.624 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:06:43.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.624 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:43.624 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:06:43.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.624 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:43.624 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:06:43.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.624 EAL: Requested device 0000:3d:02.1 cannot be used 
00:06:43.624 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:06:43.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.624 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:43.624 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:06:43.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.624 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:43.624 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:06:43.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.624 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:43.624 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:06:43.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.624 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:43.624 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:06:43.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.624 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:43.624 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:06:43.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.624 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:43.624 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:06:43.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.624 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:43.624 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:06:43.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.624 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:43.624 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:06:43.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.624 EAL: Requested device 
0000:3f:01.2 cannot be used 00:06:43.624 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:06:43.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.624 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:43.624 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:06:43.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.624 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:43.624 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:06:43.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.624 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:43.624 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:06:43.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.624 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:43.624 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:06:43.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.624 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:43.624 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:06:43.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.624 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:43.624 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:06:43.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.624 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:43.624 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:06:43.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.624 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:43.624 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:06:43.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:06:43.624 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:43.624 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:06:43.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.624 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:43.624 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:06:43.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.624 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:43.624 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:06:43.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.624 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:43.624 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:06:43.624 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.624 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:43.624 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:43.884 EAL: Using IOMMU type 1 (Type 1) 00:06:43.884 EAL: Ignore mapping IO port bar(1) 00:06:43.884 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:06:43.884 EAL: Ignore mapping IO port bar(1) 00:06:43.884 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:06:43.884 EAL: Ignore mapping IO port bar(1) 00:06:43.884 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:06:43.884 EAL: Ignore mapping IO port bar(1) 00:06:43.884 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:06:43.884 EAL: Ignore mapping IO port bar(1) 00:06:43.884 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:06:43.884 EAL: Ignore mapping IO port bar(1) 00:06:43.884 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:06:43.884 EAL: Ignore mapping IO port bar(1) 00:06:43.884 EAL: Probe PCI driver: 
spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:06:43.884 EAL: Ignore mapping IO port bar(1) 00:06:43.884 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 0000:3d:01.7 cannot be used 
00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 
0000:3f:01.0 cannot be used 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:06:43.884 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:43.884 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:06:43.884 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.884 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:43.885 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:06:43.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.885 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:43.885 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:06:43.885 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:43.885 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:43.885 EAL: Ignore mapping IO port bar(1) 00:06:43.885 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:06:43.885 EAL: Ignore mapping IO port bar(1) 00:06:43.885 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:06:43.885 EAL: Ignore mapping IO port bar(1) 00:06:43.885 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:06:43.885 EAL: Ignore mapping IO port bar(1) 00:06:43.885 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:06:44.143 EAL: Ignore mapping 
IO port bar(1)
00:06:44.143 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:06:44.143 EAL: Ignore mapping IO port bar(1)
00:06:44.143 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:06:44.143 EAL: Ignore mapping IO port bar(1)
00:06:44.143 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:06:44.143 EAL: Ignore mapping IO port bar(1)
00:06:44.143 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:06:44.711 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1)
00:06:48.896 EAL: Releasing PCI mapped resource for 0000:d8:00.0
00:06:48.896 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001120000
00:06:49.156 Starting DPDK initialization...
00:06:49.156 Starting SPDK post initialization...
00:06:49.156 SPDK NVMe probe
00:06:49.156 Attaching to 0000:d8:00.0
00:06:49.156 Attached to 0000:d8:00.0
00:06:49.156 Cleaning up...
00:06:49.156
00:06:49.156 real 0m5.423s
00:06:49.156 user 0m4.000s
00:06:49.156 sys 0m0.480s
00:06:49.156 13:06:29 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:49.156 13:06:29 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:06:49.156 ************************************
00:06:49.156 END TEST env_dpdk_post_init
00:06:49.156 ************************************
00:06:49.156 13:06:29 env -- env/env.sh@26 -- # uname
00:06:49.156 13:06:29 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:06:49.156 13:06:29 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:06:49.156 13:06:29 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:49.156 13:06:29 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:49.156 13:06:29 env -- common/autotest_common.sh@10 -- # set +x
00:06:49.156 ************************************
00:06:49.156 START TEST
env_mem_callbacks 00:06:49.156 ************************************ 00:06:49.156 13:06:29 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:49.156 EAL: Detected CPU lcores: 112 00:06:49.156 EAL: Detected NUMA nodes: 2 00:06:49.156 EAL: Detected shared linkage of DPDK 00:06:49.156 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:49.156 EAL: Selected IOVA mode 'PA' 00:06:49.156 EAL: VFIO support initialized 00:06:49.156 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_asym 00:06:49.156 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_sym 00:06:49.156 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.156 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_asym 00:06:49.156 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_sym 00:06:49.156 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.156 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_asym 00:06:49.156 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_sym 00:06:49.156 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.156 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_asym 00:06:49.156 
CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_sym 00:06:49.156 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.156 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_asym 00:06:49.156 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_sym 00:06:49.156 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.156 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_asym 00:06:49.156 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_sym 00:06:49.156 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.156 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_asym 00:06:49.156 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_sym 00:06:49.156 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.156 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_asym 00:06:49.156 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_sym 00:06:49.156 CRYPTODEV: Initialisation parameters - name: 
0000:1a:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.156 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_asym 00:06:49.156 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_sym 00:06:49.156 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.156 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_asym 00:06:49.156 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_sym 00:06:49.156 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.156 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_asym 00:06:49.156 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_sym 00:06:49.156 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.156 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_asym 00:06:49.156 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_sym 00:06:49.156 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.156 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_asym 00:06:49.156 CRYPTODEV: Initialisation 
parameters - name: 0000:1a:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_sym 00:06:49.156 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.156 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_asym 00:06:49.156 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_sym 00:06:49.156 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.156 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_asym 00:06:49.156 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.156 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_sym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.157 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_asym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_sym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.157 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_asym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_sym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_sym,socket id: 0, max queue pairs: 
0 00:06:49.157 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_asym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_sym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.157 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_asym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_sym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.157 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_asym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_sym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.157 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_asym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_sym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.157 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_asym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_asym,socket id: 0, 
max queue pairs: 0 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_sym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.157 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_asym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_sym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.157 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_asym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_sym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.157 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_asym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_sym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.157 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_asym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_sym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.157 EAL: Probe PCI driver: qat (8086:37c9) 
device: 0000:1c:02.2 (socket 0) 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_asym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_sym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.157 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_asym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_sym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.157 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_asym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_sym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.157 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_asym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_sym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.157 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_asym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.157 CRYPTODEV: Creating 
cryptodev 0000:1c:02.6_qat_sym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.157 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_asym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_sym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.157 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_asym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_sym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.157 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_asym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_sym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.157 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_asym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_sym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.157 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:06:49.157 
CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_asym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_sym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.157 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_asym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_sym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.157 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_asym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_sym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.157 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_asym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_sym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.157 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_asym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_sym 00:06:49.157 
CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.157 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_asym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_sym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.157 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_asym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.157 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_sym 00:06:49.157 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.157 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:06:49.158 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_asym 00:06:49.158 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.158 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_sym 00:06:49.158 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.158 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:06:49.158 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_asym 00:06:49.158 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.158 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_sym 00:06:49.158 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.158 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:06:49.158 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_asym 
00:06:49.158 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.158 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_sym 00:06:49.158 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.158 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:06:49.158 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_asym 00:06:49.158 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.158 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_sym 00:06:49.158 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.158 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:06:49.158 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_asym 00:06:49.158 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.158 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_sym 00:06:49.158 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.158 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:06:49.158 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_asym 00:06:49.158 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:49.158 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_sym 00:06:49.158 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:49.158 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:06:49.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.158 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:49.158 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:06:49.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:06:49.158 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:49.158 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:06:49.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.158 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:49.158 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:06:49.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.158 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:49.158 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:06:49.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.158 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:49.158 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:06:49.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.158 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:49.158 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:06:49.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.158 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:49.158 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:06:49.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.158 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:49.158 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:06:49.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.158 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:49.158 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:06:49.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.158 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:49.158 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:06:49.158 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:06:49.158 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:49.158 [the same "EAL: Probe PCI driver: qat (8086:37c9)" / "qat_pci_device_allocate(): Reached maximum number of QAT devices" / "EAL: Requested device ... cannot be used" sequence repeats for every remaining QAT function from 0000:3d:02.3 through 0000:3f:02.7] 00:06:49.158 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:49.158 00:06:49.158 00:06:49.158 CUnit - A unit testing framework for C - Version 2.1-3 00:06:49.158 http://cunit.sourceforge.net/ 00:06:49.158 00:06:49.158 00:06:49.158 Suite: memory 00:06:49.158 Test: test ... 
00:06:49.158 register 0x200000200000 2097152 00:06:49.158 malloc 3145728 00:06:49.158 register 0x200000400000 4194304 00:06:49.158 buf 0x200000500000 len 3145728 PASSED 00:06:49.158 malloc 64 00:06:49.158 buf 0x2000004fff40 len 64 PASSED 00:06:49.159 malloc 4194304 00:06:49.159 register 0x200000800000 6291456 00:06:49.159 buf 0x200000a00000 len 4194304 PASSED 00:06:49.159 free 0x200000500000 3145728 00:06:49.159 free 0x2000004fff40 64 00:06:49.159 unregister 0x200000400000 4194304 PASSED 00:06:49.159 free 0x200000a00000 4194304 00:06:49.159 unregister 0x200000800000 6291456 PASSED 00:06:49.159 malloc 8388608 00:06:49.159 register 0x200000400000 10485760 00:06:49.159 buf 0x200000600000 len 8388608 PASSED 00:06:49.159 free 0x200000600000 8388608 00:06:49.159 unregister 0x200000400000 10485760 PASSED 00:06:49.159 passed 00:06:49.159 00:06:49.159 Run Summary: Type Total Ran Passed Failed Inactive 00:06:49.159 suites 1 1 n/a 0 0 00:06:49.159 tests 1 1 1 0 0 00:06:49.159 asserts 15 15 15 0 n/a 00:06:49.159 00:06:49.159 Elapsed time = 0.005 seconds 00:06:49.159 00:06:49.159 real 0m0.109s 00:06:49.159 user 0m0.042s 00:06:49.159 sys 0m0.066s 00:06:49.159 13:06:29 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.159 13:06:29 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:49.159 ************************************ 00:06:49.159 END TEST env_mem_callbacks 00:06:49.159 ************************************ 00:06:49.159 00:06:49.159 real 0m7.562s 00:06:49.159 user 0m5.135s 00:06:49.159 sys 0m1.492s 00:06:49.159 13:06:29 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.159 13:06:29 env -- common/autotest_common.sh@10 -- # set +x 00:06:49.159 ************************************ 00:06:49.159 END TEST env 00:06:49.159 ************************************ 00:06:49.417 13:06:29 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh 00:06:49.417 13:06:29 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.417 13:06:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.417 13:06:29 -- common/autotest_common.sh@10 -- # set +x 00:06:49.417 ************************************ 00:06:49.417 START TEST rpc 00:06:49.417 ************************************ 00:06:49.417 13:06:29 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh 00:06:49.417 * Looking for test storage... 00:06:49.417 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:06:49.417 13:06:29 rpc -- rpc/rpc.sh@65 -- # spdk_pid=601470 00:06:49.417 13:06:29 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:49.417 13:06:29 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:49.417 13:06:29 rpc -- rpc/rpc.sh@67 -- # waitforlisten 601470 00:06:49.417 13:06:29 rpc -- common/autotest_common.sh@831 -- # '[' -z 601470 ']' 00:06:49.417 13:06:29 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.417 13:06:29 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.417 13:06:29 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.417 13:06:29 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.417 13:06:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.417 [2024-07-26 13:06:29.922525] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
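
The `waitforlisten` step above blocks until spdk_tgt is up and its RPC socket at /var/tmp/spdk.sock exists. The sketch below is a simplified stand-in for that polling pattern, not SPDK's actual implementation: the real helper also verifies the process is alive and accepting RPCs, while this version only waits for the socket file to appear; the function name, demo path, and retry budget are illustrative.

```shell
#!/usr/bin/env bash
# Simplified wait-for-unix-socket loop (illustrative; real waitforlisten
# in autotest_common.sh does more, e.g. checks the pid and issues an RPC).
wait_for_socket() {
  local sock=$1 retries=${2:-100}
  while [ "$retries" -gt 0 ]; do
    [ -S "$sock" ] && return 0   # path exists and is a socket
    retries=$((retries - 1))
    sleep 0.1
  done
  return 1
}

# Demo: create a unix socket file with the Python stdlib, then wait on it.
demo=/tmp/demo_wait.sock
rm -f "$demo"
python3 -c "import socket; socket.socket(socket.AF_UNIX).bind('$demo')"
wait_for_socket "$demo" 50 && echo "socket present: $demo"
rm -f "$demo"
```

The trap set on line 66 of rpc.sh (`killprocess $spdk_pid`) pairs with this wait: if the socket never appears, the test aborts and kills the half-started target instead of hanging.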
00:06:49.417 [2024-07-26 13:06:29.922593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid601470 ] 00:06:49.711 [the "qat_pci_device_allocate(): Reached maximum number of QAT devices" / "EAL: Requested device ... cannot be used" pair repeats for every QAT function from 0000:3d:01.0 through 0000:3f:02.7] 00:06:49.711 [2024-07-26 13:06:30.058746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.711 [2024-07-26 13:06:30.142455] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:49.712 [2024-07-26 13:06:30.142506] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 601470' to capture a snapshot of events at runtime. 00:06:49.712 [2024-07-26 13:06:30.142519] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:49.712 [2024-07-26 13:06:30.142531] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:49.712 [2024-07-26 13:06:30.142542] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid601470 for offline analysis/debug. 
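
The app_setup_trace notices above show spdk_tgt was started with the `bdev` tracepoint group enabled; in this run's trace_get_info output that shows up as `"tpoint_group_mask": "0x8"`. Testing whether a given group bit is set in such a mask is plain shell arithmetic; the sketch below is illustrative (the helper name is made up, the mask values mirror this log):

```shell
#!/usr/bin/env bash
# Illustrative check: is a tracepoint group bit set in a group mask?
# Bash arithmetic accepts 0x-prefixed hex literals directly.
mask_has_group() {
  local mask=$1 group_bit=$2
  [ $(( mask & group_bit )) -ne 0 ]
}

tpoint_group_mask=0x8   # value reported by trace_get_info in this run
bdev_bit=0x8            # the bdev tracepoint group
if mask_has_group "$tpoint_group_mask" "$bdev_bit"; then
  echo "bdev tracing enabled"
fi
```

This is the same comparison rpc_trace_cmd_test performs later when it asserts the bdev entry's `tpoint_mask` (`0xffffffffffffffff`) is non-zero.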
00:06:49.712 [2024-07-26 13:06:30.142572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.278 13:06:30 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.279 13:06:30 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:50.279 13:06:30 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:06:50.279 13:06:30 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:06:50.279 13:06:30 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:50.279 13:06:30 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:50.279 13:06:30 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:50.279 13:06:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.279 13:06:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.279 ************************************ 00:06:50.279 START TEST rpc_integrity 00:06:50.279 ************************************ 00:06:50.279 13:06:30 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:50.537 13:06:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:50.537 13:06:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.537 13:06:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.537 13:06:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.537 13:06:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:06:50.537 13:06:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:50.537 13:06:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:50.537 13:06:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:50.537 13:06:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.537 13:06:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.537 13:06:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.537 13:06:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:50.537 13:06:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:50.537 13:06:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.537 13:06:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.537 13:06:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.537 13:06:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:50.537 { 00:06:50.537 "name": "Malloc0", 00:06:50.537 "aliases": [ 00:06:50.537 "0294bf90-28f3-490c-b1da-ef6e3efe4e55" 00:06:50.537 ], 00:06:50.537 "product_name": "Malloc disk", 00:06:50.537 "block_size": 512, 00:06:50.537 "num_blocks": 16384, 00:06:50.537 "uuid": "0294bf90-28f3-490c-b1da-ef6e3efe4e55", 00:06:50.537 "assigned_rate_limits": { 00:06:50.537 "rw_ios_per_sec": 0, 00:06:50.537 "rw_mbytes_per_sec": 0, 00:06:50.537 "r_mbytes_per_sec": 0, 00:06:50.537 "w_mbytes_per_sec": 0 00:06:50.537 }, 00:06:50.537 "claimed": false, 00:06:50.537 "zoned": false, 00:06:50.537 "supported_io_types": { 00:06:50.537 "read": true, 00:06:50.537 "write": true, 00:06:50.537 "unmap": true, 00:06:50.537 "flush": true, 00:06:50.537 "reset": true, 00:06:50.537 "nvme_admin": false, 00:06:50.537 "nvme_io": false, 00:06:50.537 "nvme_io_md": false, 00:06:50.537 "write_zeroes": true, 00:06:50.537 "zcopy": true, 00:06:50.537 "get_zone_info": false, 00:06:50.537 "zone_management": 
false, 00:06:50.537 "zone_append": false, 00:06:50.537 "compare": false, 00:06:50.537 "compare_and_write": false, 00:06:50.537 "abort": true, 00:06:50.537 "seek_hole": false, 00:06:50.537 "seek_data": false, 00:06:50.537 "copy": true, 00:06:50.537 "nvme_iov_md": false 00:06:50.537 }, 00:06:50.537 "memory_domains": [ 00:06:50.537 { 00:06:50.537 "dma_device_id": "system", 00:06:50.537 "dma_device_type": 1 00:06:50.537 }, 00:06:50.537 { 00:06:50.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:50.537 "dma_device_type": 2 00:06:50.537 } 00:06:50.537 ], 00:06:50.537 "driver_specific": {} 00:06:50.538 } 00:06:50.538 ]' 00:06:50.538 13:06:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:50.538 13:06:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:50.538 13:06:30 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:50.538 13:06:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.538 13:06:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.538 [2024-07-26 13:06:30.947599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:50.538 [2024-07-26 13:06:30.947638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:50.538 [2024-07-26 13:06:30.947657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1faa5f0 00:06:50.538 [2024-07-26 13:06:30.947669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:50.538 [2024-07-26 13:06:30.949149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:50.538 [2024-07-26 13:06:30.949179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:50.538 Passthru0 00:06:50.538 13:06:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.538 13:06:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 
00:06:50.538 13:06:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.538 13:06:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.538 13:06:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.538 13:06:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:50.538 { 00:06:50.538 "name": "Malloc0", 00:06:50.538 "aliases": [ 00:06:50.538 "0294bf90-28f3-490c-b1da-ef6e3efe4e55" 00:06:50.538 ], 00:06:50.538 "product_name": "Malloc disk", 00:06:50.538 "block_size": 512, 00:06:50.538 "num_blocks": 16384, 00:06:50.538 "uuid": "0294bf90-28f3-490c-b1da-ef6e3efe4e55", 00:06:50.538 "assigned_rate_limits": { 00:06:50.538 "rw_ios_per_sec": 0, 00:06:50.538 "rw_mbytes_per_sec": 0, 00:06:50.538 "r_mbytes_per_sec": 0, 00:06:50.538 "w_mbytes_per_sec": 0 00:06:50.538 }, 00:06:50.538 "claimed": true, 00:06:50.538 "claim_type": "exclusive_write", 00:06:50.538 "zoned": false, 00:06:50.538 "supported_io_types": { 00:06:50.538 "read": true, 00:06:50.538 "write": true, 00:06:50.538 "unmap": true, 00:06:50.538 "flush": true, 00:06:50.538 "reset": true, 00:06:50.538 "nvme_admin": false, 00:06:50.538 "nvme_io": false, 00:06:50.538 "nvme_io_md": false, 00:06:50.538 "write_zeroes": true, 00:06:50.538 "zcopy": true, 00:06:50.538 "get_zone_info": false, 00:06:50.538 "zone_management": false, 00:06:50.538 "zone_append": false, 00:06:50.538 "compare": false, 00:06:50.538 "compare_and_write": false, 00:06:50.538 "abort": true, 00:06:50.538 "seek_hole": false, 00:06:50.538 "seek_data": false, 00:06:50.538 "copy": true, 00:06:50.538 "nvme_iov_md": false 00:06:50.538 }, 00:06:50.538 "memory_domains": [ 00:06:50.538 { 00:06:50.538 "dma_device_id": "system", 00:06:50.538 "dma_device_type": 1 00:06:50.538 }, 00:06:50.538 { 00:06:50.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:50.538 "dma_device_type": 2 00:06:50.538 } 00:06:50.538 ], 00:06:50.538 "driver_specific": {} 00:06:50.538 }, 00:06:50.538 { 00:06:50.538 
"name": "Passthru0", 00:06:50.538 "aliases": [ 00:06:50.538 "4d231575-15df-5c52-ae3b-770b391943b0" 00:06:50.538 ], 00:06:50.538 "product_name": "passthru", 00:06:50.538 "block_size": 512, 00:06:50.538 "num_blocks": 16384, 00:06:50.538 "uuid": "4d231575-15df-5c52-ae3b-770b391943b0", 00:06:50.538 "assigned_rate_limits": { 00:06:50.538 "rw_ios_per_sec": 0, 00:06:50.538 "rw_mbytes_per_sec": 0, 00:06:50.538 "r_mbytes_per_sec": 0, 00:06:50.538 "w_mbytes_per_sec": 0 00:06:50.538 }, 00:06:50.538 "claimed": false, 00:06:50.538 "zoned": false, 00:06:50.538 "supported_io_types": { 00:06:50.538 "read": true, 00:06:50.538 "write": true, 00:06:50.538 "unmap": true, 00:06:50.538 "flush": true, 00:06:50.538 "reset": true, 00:06:50.538 "nvme_admin": false, 00:06:50.538 "nvme_io": false, 00:06:50.538 "nvme_io_md": false, 00:06:50.538 "write_zeroes": true, 00:06:50.538 "zcopy": true, 00:06:50.538 "get_zone_info": false, 00:06:50.538 "zone_management": false, 00:06:50.538 "zone_append": false, 00:06:50.538 "compare": false, 00:06:50.538 "compare_and_write": false, 00:06:50.538 "abort": true, 00:06:50.538 "seek_hole": false, 00:06:50.538 "seek_data": false, 00:06:50.538 "copy": true, 00:06:50.538 "nvme_iov_md": false 00:06:50.538 }, 00:06:50.538 "memory_domains": [ 00:06:50.538 { 00:06:50.538 "dma_device_id": "system", 00:06:50.538 "dma_device_type": 1 00:06:50.538 }, 00:06:50.538 { 00:06:50.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:50.538 "dma_device_type": 2 00:06:50.538 } 00:06:50.538 ], 00:06:50.538 "driver_specific": { 00:06:50.538 "passthru": { 00:06:50.538 "name": "Passthru0", 00:06:50.538 "base_bdev_name": "Malloc0" 00:06:50.538 } 00:06:50.538 } 00:06:50.538 } 00:06:50.538 ]' 00:06:50.538 13:06:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:50.538 13:06:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:50.538 13:06:31 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:50.538 13:06:31 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.538 13:06:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.538 13:06:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.538 13:06:31 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:50.538 13:06:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.538 13:06:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.538 13:06:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.538 13:06:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:50.538 13:06:31 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.538 13:06:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.538 13:06:31 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.538 13:06:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:50.538 13:06:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:50.796 13:06:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:50.796 00:06:50.796 real 0m0.294s 00:06:50.796 user 0m0.191s 00:06:50.796 sys 0m0.050s 00:06:50.796 13:06:31 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.796 13:06:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:50.797 ************************************ 00:06:50.797 END TEST rpc_integrity 00:06:50.797 ************************************ 00:06:50.797 13:06:31 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:50.797 13:06:31 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:50.797 13:06:31 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.797 13:06:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.797 ************************************ 00:06:50.797 START TEST rpc_plugins 00:06:50.797 
************************************ 00:06:50.797 13:06:31 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:50.797 13:06:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:50.797 13:06:31 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.797 13:06:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:50.797 13:06:31 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.797 13:06:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:50.797 13:06:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:50.797 13:06:31 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.797 13:06:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:50.797 13:06:31 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.797 13:06:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:50.797 { 00:06:50.797 "name": "Malloc1", 00:06:50.797 "aliases": [ 00:06:50.797 "b42efbb3-b97a-47f9-a69d-81e7bfe956e4" 00:06:50.797 ], 00:06:50.797 "product_name": "Malloc disk", 00:06:50.797 "block_size": 4096, 00:06:50.797 "num_blocks": 256, 00:06:50.797 "uuid": "b42efbb3-b97a-47f9-a69d-81e7bfe956e4", 00:06:50.797 "assigned_rate_limits": { 00:06:50.797 "rw_ios_per_sec": 0, 00:06:50.797 "rw_mbytes_per_sec": 0, 00:06:50.797 "r_mbytes_per_sec": 0, 00:06:50.797 "w_mbytes_per_sec": 0 00:06:50.797 }, 00:06:50.797 "claimed": false, 00:06:50.797 "zoned": false, 00:06:50.797 "supported_io_types": { 00:06:50.797 "read": true, 00:06:50.797 "write": true, 00:06:50.797 "unmap": true, 00:06:50.797 "flush": true, 00:06:50.797 "reset": true, 00:06:50.797 "nvme_admin": false, 00:06:50.797 "nvme_io": false, 00:06:50.797 "nvme_io_md": false, 00:06:50.797 "write_zeroes": true, 00:06:50.797 "zcopy": true, 00:06:50.797 "get_zone_info": false, 00:06:50.797 "zone_management": false, 00:06:50.797 "zone_append": false, 
00:06:50.797 "compare": false, 00:06:50.797 "compare_and_write": false, 00:06:50.797 "abort": true, 00:06:50.797 "seek_hole": false, 00:06:50.797 "seek_data": false, 00:06:50.797 "copy": true, 00:06:50.797 "nvme_iov_md": false 00:06:50.797 }, 00:06:50.797 "memory_domains": [ 00:06:50.797 { 00:06:50.797 "dma_device_id": "system", 00:06:50.797 "dma_device_type": 1 00:06:50.797 }, 00:06:50.797 { 00:06:50.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:50.797 "dma_device_type": 2 00:06:50.797 } 00:06:50.797 ], 00:06:50.797 "driver_specific": {} 00:06:50.797 } 00:06:50.797 ]' 00:06:50.797 13:06:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:50.797 13:06:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:50.797 13:06:31 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:50.797 13:06:31 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.797 13:06:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:50.797 13:06:31 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.797 13:06:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:50.797 13:06:31 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.797 13:06:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:50.797 13:06:31 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.797 13:06:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:50.797 13:06:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:51.055 13:06:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:51.055 00:06:51.055 real 0m0.147s 00:06:51.055 user 0m0.092s 00:06:51.055 sys 0m0.023s 00:06:51.055 13:06:31 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.055 13:06:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:51.055 ************************************ 00:06:51.055 END TEST 
rpc_plugins 00:06:51.055 ************************************ 00:06:51.055 13:06:31 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:51.055 13:06:31 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.055 13:06:31 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.055 13:06:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.055 ************************************ 00:06:51.055 START TEST rpc_trace_cmd_test 00:06:51.055 ************************************ 00:06:51.055 13:06:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:51.055 13:06:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:51.055 13:06:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:51.055 13:06:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.055 13:06:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.055 13:06:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.055 13:06:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:51.055 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid601470", 00:06:51.055 "tpoint_group_mask": "0x8", 00:06:51.055 "iscsi_conn": { 00:06:51.055 "mask": "0x2", 00:06:51.055 "tpoint_mask": "0x0" 00:06:51.055 }, 00:06:51.055 "scsi": { 00:06:51.055 "mask": "0x4", 00:06:51.055 "tpoint_mask": "0x0" 00:06:51.055 }, 00:06:51.055 "bdev": { 00:06:51.055 "mask": "0x8", 00:06:51.055 "tpoint_mask": "0xffffffffffffffff" 00:06:51.055 }, 00:06:51.055 "nvmf_rdma": { 00:06:51.055 "mask": "0x10", 00:06:51.055 "tpoint_mask": "0x0" 00:06:51.055 }, 00:06:51.055 "nvmf_tcp": { 00:06:51.055 "mask": "0x20", 00:06:51.055 "tpoint_mask": "0x0" 00:06:51.055 }, 00:06:51.055 "ftl": { 00:06:51.055 "mask": "0x40", 00:06:51.055 "tpoint_mask": "0x0" 00:06:51.055 }, 00:06:51.055 "blobfs": { 00:06:51.055 "mask": "0x80", 00:06:51.055 "tpoint_mask": "0x0" 
00:06:51.055 }, 00:06:51.055 "dsa": { 00:06:51.055 "mask": "0x200", 00:06:51.055 "tpoint_mask": "0x0" 00:06:51.055 }, 00:06:51.055 "thread": { 00:06:51.055 "mask": "0x400", 00:06:51.055 "tpoint_mask": "0x0" 00:06:51.055 }, 00:06:51.055 "nvme_pcie": { 00:06:51.055 "mask": "0x800", 00:06:51.055 "tpoint_mask": "0x0" 00:06:51.055 }, 00:06:51.055 "iaa": { 00:06:51.055 "mask": "0x1000", 00:06:51.055 "tpoint_mask": "0x0" 00:06:51.055 }, 00:06:51.055 "nvme_tcp": { 00:06:51.055 "mask": "0x2000", 00:06:51.055 "tpoint_mask": "0x0" 00:06:51.055 }, 00:06:51.055 "bdev_nvme": { 00:06:51.055 "mask": "0x4000", 00:06:51.055 "tpoint_mask": "0x0" 00:06:51.055 }, 00:06:51.055 "sock": { 00:06:51.055 "mask": "0x8000", 00:06:51.055 "tpoint_mask": "0x0" 00:06:51.055 } 00:06:51.055 }' 00:06:51.055 13:06:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:51.055 13:06:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:51.055 13:06:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:51.055 13:06:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:51.055 13:06:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:51.055 13:06:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:51.055 13:06:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:51.314 13:06:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:51.314 13:06:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:51.314 13:06:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:51.314 00:06:51.314 real 0m0.244s 00:06:51.314 user 0m0.199s 00:06:51.314 sys 0m0.038s 00:06:51.314 13:06:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.314 13:06:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.314 ************************************ 
00:06:51.314 END TEST rpc_trace_cmd_test 00:06:51.314 ************************************ 00:06:51.314 13:06:31 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:51.314 13:06:31 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:51.314 13:06:31 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:51.314 13:06:31 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.314 13:06:31 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.314 13:06:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.314 ************************************ 00:06:51.314 START TEST rpc_daemon_integrity 00:06:51.314 ************************************ 00:06:51.314 13:06:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:51.314 13:06:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:51.314 13:06:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.314 13:06:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:51.314 13:06:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.314 13:06:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:51.315 13:06:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:51.315 13:06:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:51.315 13:06:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:51.315 13:06:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.315 13:06:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:51.315 13:06:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.315 13:06:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:51.315 13:06:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:51.315 13:06:31 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.315 13:06:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:51.315 13:06:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.315 13:06:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:51.315 { 00:06:51.315 "name": "Malloc2", 00:06:51.315 "aliases": [ 00:06:51.315 "26f82dfc-47e8-4089-9f20-46b4c1059794" 00:06:51.315 ], 00:06:51.315 "product_name": "Malloc disk", 00:06:51.315 "block_size": 512, 00:06:51.315 "num_blocks": 16384, 00:06:51.315 "uuid": "26f82dfc-47e8-4089-9f20-46b4c1059794", 00:06:51.315 "assigned_rate_limits": { 00:06:51.315 "rw_ios_per_sec": 0, 00:06:51.315 "rw_mbytes_per_sec": 0, 00:06:51.315 "r_mbytes_per_sec": 0, 00:06:51.315 "w_mbytes_per_sec": 0 00:06:51.315 }, 00:06:51.315 "claimed": false, 00:06:51.315 "zoned": false, 00:06:51.315 "supported_io_types": { 00:06:51.315 "read": true, 00:06:51.315 "write": true, 00:06:51.315 "unmap": true, 00:06:51.315 "flush": true, 00:06:51.315 "reset": true, 00:06:51.315 "nvme_admin": false, 00:06:51.315 "nvme_io": false, 00:06:51.315 "nvme_io_md": false, 00:06:51.315 "write_zeroes": true, 00:06:51.315 "zcopy": true, 00:06:51.315 "get_zone_info": false, 00:06:51.315 "zone_management": false, 00:06:51.315 "zone_append": false, 00:06:51.315 "compare": false, 00:06:51.315 "compare_and_write": false, 00:06:51.315 "abort": true, 00:06:51.315 "seek_hole": false, 00:06:51.315 "seek_data": false, 00:06:51.315 "copy": true, 00:06:51.315 "nvme_iov_md": false 00:06:51.315 }, 00:06:51.315 "memory_domains": [ 00:06:51.315 { 00:06:51.315 "dma_device_id": "system", 00:06:51.315 "dma_device_type": 1 00:06:51.315 }, 00:06:51.315 { 00:06:51.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:51.315 "dma_device_type": 2 00:06:51.315 } 00:06:51.315 ], 00:06:51.315 "driver_specific": {} 00:06:51.315 } 00:06:51.315 ]' 00:06:51.315 13:06:31 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@17 -- # jq length 00:06:51.574 13:06:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:51.574 13:06:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:51.574 13:06:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.574 13:06:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:51.574 [2024-07-26 13:06:31.878212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:51.574 [2024-07-26 13:06:31.878250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:51.574 [2024-07-26 13:06:31.878268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2155fb0 00:06:51.574 [2024-07-26 13:06:31.878279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:51.574 [2024-07-26 13:06:31.879534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:51.574 [2024-07-26 13:06:31.879561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:51.574 Passthru0 00:06:51.574 13:06:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.574 13:06:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:51.574 13:06:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.574 13:06:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:51.574 13:06:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.574 13:06:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:51.574 { 00:06:51.574 "name": "Malloc2", 00:06:51.574 "aliases": [ 00:06:51.574 "26f82dfc-47e8-4089-9f20-46b4c1059794" 00:06:51.574 ], 00:06:51.574 "product_name": "Malloc disk", 00:06:51.574 "block_size": 512, 00:06:51.574 "num_blocks": 16384, 00:06:51.574 
"uuid": "26f82dfc-47e8-4089-9f20-46b4c1059794", 00:06:51.574 "assigned_rate_limits": { 00:06:51.574 "rw_ios_per_sec": 0, 00:06:51.574 "rw_mbytes_per_sec": 0, 00:06:51.574 "r_mbytes_per_sec": 0, 00:06:51.574 "w_mbytes_per_sec": 0 00:06:51.574 }, 00:06:51.574 "claimed": true, 00:06:51.574 "claim_type": "exclusive_write", 00:06:51.574 "zoned": false, 00:06:51.574 "supported_io_types": { 00:06:51.574 "read": true, 00:06:51.574 "write": true, 00:06:51.574 "unmap": true, 00:06:51.574 "flush": true, 00:06:51.574 "reset": true, 00:06:51.574 "nvme_admin": false, 00:06:51.574 "nvme_io": false, 00:06:51.574 "nvme_io_md": false, 00:06:51.574 "write_zeroes": true, 00:06:51.574 "zcopy": true, 00:06:51.574 "get_zone_info": false, 00:06:51.574 "zone_management": false, 00:06:51.574 "zone_append": false, 00:06:51.574 "compare": false, 00:06:51.574 "compare_and_write": false, 00:06:51.574 "abort": true, 00:06:51.574 "seek_hole": false, 00:06:51.574 "seek_data": false, 00:06:51.574 "copy": true, 00:06:51.574 "nvme_iov_md": false 00:06:51.574 }, 00:06:51.574 "memory_domains": [ 00:06:51.574 { 00:06:51.574 "dma_device_id": "system", 00:06:51.574 "dma_device_type": 1 00:06:51.574 }, 00:06:51.574 { 00:06:51.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:51.574 "dma_device_type": 2 00:06:51.574 } 00:06:51.574 ], 00:06:51.574 "driver_specific": {} 00:06:51.574 }, 00:06:51.574 { 00:06:51.574 "name": "Passthru0", 00:06:51.574 "aliases": [ 00:06:51.574 "b62cbdf2-a5b0-53ac-bd0a-ae7b607a7a09" 00:06:51.574 ], 00:06:51.574 "product_name": "passthru", 00:06:51.574 "block_size": 512, 00:06:51.574 "num_blocks": 16384, 00:06:51.574 "uuid": "b62cbdf2-a5b0-53ac-bd0a-ae7b607a7a09", 00:06:51.574 "assigned_rate_limits": { 00:06:51.574 "rw_ios_per_sec": 0, 00:06:51.574 "rw_mbytes_per_sec": 0, 00:06:51.574 "r_mbytes_per_sec": 0, 00:06:51.574 "w_mbytes_per_sec": 0 00:06:51.574 }, 00:06:51.574 "claimed": false, 00:06:51.574 "zoned": false, 00:06:51.574 "supported_io_types": { 00:06:51.574 "read": true, 
00:06:51.574 "write": true, 00:06:51.574 "unmap": true, 00:06:51.574 "flush": true, 00:06:51.574 "reset": true, 00:06:51.574 "nvme_admin": false, 00:06:51.574 "nvme_io": false, 00:06:51.574 "nvme_io_md": false, 00:06:51.574 "write_zeroes": true, 00:06:51.574 "zcopy": true, 00:06:51.574 "get_zone_info": false, 00:06:51.574 "zone_management": false, 00:06:51.574 "zone_append": false, 00:06:51.574 "compare": false, 00:06:51.574 "compare_and_write": false, 00:06:51.574 "abort": true, 00:06:51.574 "seek_hole": false, 00:06:51.574 "seek_data": false, 00:06:51.574 "copy": true, 00:06:51.574 "nvme_iov_md": false 00:06:51.574 }, 00:06:51.574 "memory_domains": [ 00:06:51.574 { 00:06:51.574 "dma_device_id": "system", 00:06:51.574 "dma_device_type": 1 00:06:51.574 }, 00:06:51.574 { 00:06:51.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:51.574 "dma_device_type": 2 00:06:51.574 } 00:06:51.574 ], 00:06:51.574 "driver_specific": { 00:06:51.574 "passthru": { 00:06:51.574 "name": "Passthru0", 00:06:51.574 "base_bdev_name": "Malloc2" 00:06:51.574 } 00:06:51.574 } 00:06:51.574 } 00:06:51.574 ]' 00:06:51.574 13:06:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:51.574 13:06:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:51.574 13:06:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:51.574 13:06:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.574 13:06:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:51.574 13:06:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.574 13:06:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:51.574 13:06:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.574 13:06:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:51.574 13:06:31 rpc.rpc_daemon_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.574 13:06:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:51.574 13:06:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.574 13:06:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:51.574 13:06:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.574 13:06:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:51.574 13:06:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:51.574 13:06:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:51.574 00:06:51.574 real 0m0.298s 00:06:51.574 user 0m0.189s 00:06:51.574 sys 0m0.052s 00:06:51.574 13:06:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.574 13:06:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:51.574 ************************************ 00:06:51.574 END TEST rpc_daemon_integrity 00:06:51.574 ************************************ 00:06:51.574 13:06:32 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:51.574 13:06:32 rpc -- rpc/rpc.sh@84 -- # killprocess 601470 00:06:51.574 13:06:32 rpc -- common/autotest_common.sh@950 -- # '[' -z 601470 ']' 00:06:51.574 13:06:32 rpc -- common/autotest_common.sh@954 -- # kill -0 601470 00:06:51.574 13:06:32 rpc -- common/autotest_common.sh@955 -- # uname 00:06:51.574 13:06:32 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.574 13:06:32 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 601470 00:06:51.833 13:06:32 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.833 13:06:32 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.833 13:06:32 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 601470' 00:06:51.833 killing process with pid 601470 00:06:51.833 
13:06:32 rpc -- common/autotest_common.sh@969 -- # kill 601470 00:06:51.833 13:06:32 rpc -- common/autotest_common.sh@974 -- # wait 601470 00:06:52.092 00:06:52.092 real 0m2.717s 00:06:52.092 user 0m3.445s 00:06:52.092 sys 0m0.892s 00:06:52.092 13:06:32 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.092 13:06:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.092 ************************************ 00:06:52.092 END TEST rpc 00:06:52.092 ************************************ 00:06:52.092 13:06:32 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:52.092 13:06:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:52.092 13:06:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.092 13:06:32 -- common/autotest_common.sh@10 -- # set +x 00:06:52.092 ************************************ 00:06:52.092 START TEST skip_rpc 00:06:52.092 ************************************ 00:06:52.092 13:06:32 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:52.351 * Looking for test storage... 
00:06:52.351 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:06:52.351 13:06:32 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:06:52.351 13:06:32 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:06:52.351 13:06:32 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:52.351 13:06:32 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:52.351 13:06:32 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.351 13:06:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.351 ************************************ 00:06:52.351 START TEST skip_rpc 00:06:52.351 ************************************ 00:06:52.351 13:06:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:52.351 13:06:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=602131 00:06:52.351 13:06:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:52.351 13:06:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:52.351 13:06:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:52.351 [2024-07-26 13:06:32.815534] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:06:52.351 [2024-07-26 13:06:32.815660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid602131 ] 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3d:02.3 cannot be used 
00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:52.609 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.609 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:52.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.610 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:52.610 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:52.610 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:52.610 [2024-07-26 13:06:33.024183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.610 [2024-07-26 13:06:33.111512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.872 13:06:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:57.872 13:06:37 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:57.872 13:06:37 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:57.872 13:06:37 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:57.872 13:06:37 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.872 13:06:37 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:57.872 13:06:37 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.872 13:06:37 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:57.872 13:06:37 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.872 13:06:37 skip_rpc.skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:06:57.872 13:06:37 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:57.872 13:06:37 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:57.872 13:06:37 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:57.872 13:06:37 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:57.872 13:06:37 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:57.872 13:06:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:57.872 13:06:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 602131 00:06:57.872 13:06:37 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 602131 ']' 00:06:57.872 13:06:37 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 602131 00:06:57.872 13:06:37 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:57.872 13:06:37 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.872 13:06:37 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 602131 00:06:57.872 13:06:37 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.872 13:06:37 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.872 13:06:37 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 602131' 00:06:57.872 killing process with pid 602131 00:06:57.872 13:06:37 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 602131 00:06:57.872 13:06:37 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 602131 00:06:57.872 00:06:57.872 real 0m5.397s 00:06:57.872 user 0m5.002s 00:06:57.872 sys 0m0.410s 00:06:57.872 13:06:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.872 13:06:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.872 
************************************ 00:06:57.872 END TEST skip_rpc 00:06:57.872 ************************************ 00:06:57.872 13:06:38 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:57.872 13:06:38 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.872 13:06:38 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.872 13:06:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.872 ************************************ 00:06:57.872 START TEST skip_rpc_with_json 00:06:57.872 ************************************ 00:06:57.872 13:06:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:57.872 13:06:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:57.872 13:06:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=603206 00:06:57.872 13:06:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:57.872 13:06:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 603206 00:06:57.872 13:06:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 603206 ']' 00:06:57.872 13:06:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.873 13:06:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.873 13:06:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:57.873 13:06:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.873 13:06:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:57.873 13:06:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:57.873 [2024-07-26 13:06:38.236852] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:06:57.873 [2024-07-26 13:06:38.236912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid603206 ] 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested 
device 0000:3d:02.0 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3f:01.6 
cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:57.873 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:57.873 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:57.873 [2024-07-26 13:06:38.370482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.131 [2024-07-26 13:06:38.457299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.698 13:06:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.698 13:06:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:58.698 13:06:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:58.698 13:06:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.698 13:06:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # 
set +x 00:06:58.698 [2024-07-26 13:06:39.127196] nvmf_rpc.c:2573:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:58.698 request: 00:06:58.698 { 00:06:58.698 "trtype": "tcp", 00:06:58.698 "method": "nvmf_get_transports", 00:06:58.698 "req_id": 1 00:06:58.698 } 00:06:58.698 Got JSON-RPC error response 00:06:58.698 response: 00:06:58.698 { 00:06:58.698 "code": -19, 00:06:58.698 "message": "No such device" 00:06:58.698 } 00:06:58.698 13:06:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:58.698 13:06:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:58.698 13:06:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.698 13:06:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:58.698 [2024-07-26 13:06:39.135329] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.698 13:06:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.698 13:06:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:58.698 13:06:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.698 13:06:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:58.957 13:06:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.957 13:06:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:06:58.957 { 00:06:58.957 "subsystems": [ 00:06:58.957 { 00:06:58.957 "subsystem": "keyring", 00:06:58.957 "config": [] 00:06:58.957 }, 00:06:58.957 { 00:06:58.957 "subsystem": "iobuf", 00:06:58.957 "config": [ 00:06:58.957 { 00:06:58.957 "method": "iobuf_set_options", 00:06:58.957 "params": { 00:06:58.957 "small_pool_count": 8192, 00:06:58.957 "large_pool_count": 1024, 00:06:58.957 
"small_bufsize": 8192, 00:06:58.957 "large_bufsize": 135168 00:06:58.957 } 00:06:58.957 } 00:06:58.957 ] 00:06:58.957 }, 00:06:58.957 { 00:06:58.957 "subsystem": "sock", 00:06:58.957 "config": [ 00:06:58.957 { 00:06:58.957 "method": "sock_set_default_impl", 00:06:58.957 "params": { 00:06:58.957 "impl_name": "posix" 00:06:58.957 } 00:06:58.957 }, 00:06:58.957 { 00:06:58.957 "method": "sock_impl_set_options", 00:06:58.957 "params": { 00:06:58.957 "impl_name": "ssl", 00:06:58.957 "recv_buf_size": 4096, 00:06:58.957 "send_buf_size": 4096, 00:06:58.957 "enable_recv_pipe": true, 00:06:58.957 "enable_quickack": false, 00:06:58.957 "enable_placement_id": 0, 00:06:58.957 "enable_zerocopy_send_server": true, 00:06:58.957 "enable_zerocopy_send_client": false, 00:06:58.957 "zerocopy_threshold": 0, 00:06:58.957 "tls_version": 0, 00:06:58.957 "enable_ktls": false 00:06:58.957 } 00:06:58.957 }, 00:06:58.957 { 00:06:58.957 "method": "sock_impl_set_options", 00:06:58.957 "params": { 00:06:58.957 "impl_name": "posix", 00:06:58.957 "recv_buf_size": 2097152, 00:06:58.957 "send_buf_size": 2097152, 00:06:58.957 "enable_recv_pipe": true, 00:06:58.957 "enable_quickack": false, 00:06:58.957 "enable_placement_id": 0, 00:06:58.957 "enable_zerocopy_send_server": true, 00:06:58.957 "enable_zerocopy_send_client": false, 00:06:58.957 "zerocopy_threshold": 0, 00:06:58.957 "tls_version": 0, 00:06:58.957 "enable_ktls": false 00:06:58.957 } 00:06:58.957 } 00:06:58.957 ] 00:06:58.957 }, 00:06:58.957 { 00:06:58.957 "subsystem": "vmd", 00:06:58.957 "config": [] 00:06:58.957 }, 00:06:58.957 { 00:06:58.957 "subsystem": "accel", 00:06:58.957 "config": [ 00:06:58.957 { 00:06:58.957 "method": "accel_set_options", 00:06:58.957 "params": { 00:06:58.957 "small_cache_size": 128, 00:06:58.957 "large_cache_size": 16, 00:06:58.957 "task_count": 2048, 00:06:58.957 "sequence_count": 2048, 00:06:58.957 "buf_count": 2048 00:06:58.957 } 00:06:58.957 } 00:06:58.957 ] 00:06:58.957 }, 00:06:58.957 { 00:06:58.957 
"subsystem": "bdev", 00:06:58.957 "config": [ 00:06:58.957 { 00:06:58.957 "method": "bdev_set_options", 00:06:58.957 "params": { 00:06:58.957 "bdev_io_pool_size": 65535, 00:06:58.957 "bdev_io_cache_size": 256, 00:06:58.957 "bdev_auto_examine": true, 00:06:58.957 "iobuf_small_cache_size": 128, 00:06:58.957 "iobuf_large_cache_size": 16 00:06:58.957 } 00:06:58.957 }, 00:06:58.957 { 00:06:58.957 "method": "bdev_raid_set_options", 00:06:58.957 "params": { 00:06:58.957 "process_window_size_kb": 1024, 00:06:58.957 "process_max_bandwidth_mb_sec": 0 00:06:58.957 } 00:06:58.957 }, 00:06:58.957 { 00:06:58.957 "method": "bdev_iscsi_set_options", 00:06:58.957 "params": { 00:06:58.957 "timeout_sec": 30 00:06:58.957 } 00:06:58.957 }, 00:06:58.957 { 00:06:58.957 "method": "bdev_nvme_set_options", 00:06:58.957 "params": { 00:06:58.957 "action_on_timeout": "none", 00:06:58.957 "timeout_us": 0, 00:06:58.957 "timeout_admin_us": 0, 00:06:58.957 "keep_alive_timeout_ms": 10000, 00:06:58.957 "arbitration_burst": 0, 00:06:58.957 "low_priority_weight": 0, 00:06:58.957 "medium_priority_weight": 0, 00:06:58.957 "high_priority_weight": 0, 00:06:58.957 "nvme_adminq_poll_period_us": 10000, 00:06:58.957 "nvme_ioq_poll_period_us": 0, 00:06:58.957 "io_queue_requests": 0, 00:06:58.957 "delay_cmd_submit": true, 00:06:58.957 "transport_retry_count": 4, 00:06:58.957 "bdev_retry_count": 3, 00:06:58.957 "transport_ack_timeout": 0, 00:06:58.957 "ctrlr_loss_timeout_sec": 0, 00:06:58.957 "reconnect_delay_sec": 0, 00:06:58.957 "fast_io_fail_timeout_sec": 0, 00:06:58.957 "disable_auto_failback": false, 00:06:58.957 "generate_uuids": false, 00:06:58.957 "transport_tos": 0, 00:06:58.957 "nvme_error_stat": false, 00:06:58.957 "rdma_srq_size": 0, 00:06:58.957 "io_path_stat": false, 00:06:58.957 "allow_accel_sequence": false, 00:06:58.957 "rdma_max_cq_size": 0, 00:06:58.957 "rdma_cm_event_timeout_ms": 0, 00:06:58.957 "dhchap_digests": [ 00:06:58.957 "sha256", 00:06:58.958 "sha384", 00:06:58.958 "sha512" 
00:06:58.958 ], 00:06:58.958 "dhchap_dhgroups": [ 00:06:58.958 "null", 00:06:58.958 "ffdhe2048", 00:06:58.958 "ffdhe3072", 00:06:58.958 "ffdhe4096", 00:06:58.958 "ffdhe6144", 00:06:58.958 "ffdhe8192" 00:06:58.958 ] 00:06:58.958 } 00:06:58.958 }, 00:06:58.958 { 00:06:58.958 "method": "bdev_nvme_set_hotplug", 00:06:58.958 "params": { 00:06:58.958 "period_us": 100000, 00:06:58.958 "enable": false 00:06:58.958 } 00:06:58.958 }, 00:06:58.958 { 00:06:58.958 "method": "bdev_wait_for_examine" 00:06:58.958 } 00:06:58.958 ] 00:06:58.958 }, 00:06:58.958 { 00:06:58.958 "subsystem": "scsi", 00:06:58.958 "config": null 00:06:58.958 }, 00:06:58.958 { 00:06:58.958 "subsystem": "scheduler", 00:06:58.958 "config": [ 00:06:58.958 { 00:06:58.958 "method": "framework_set_scheduler", 00:06:58.958 "params": { 00:06:58.958 "name": "static" 00:06:58.958 } 00:06:58.958 } 00:06:58.958 ] 00:06:58.958 }, 00:06:58.958 { 00:06:58.958 "subsystem": "vhost_scsi", 00:06:58.958 "config": [] 00:06:58.958 }, 00:06:58.958 { 00:06:58.958 "subsystem": "vhost_blk", 00:06:58.958 "config": [] 00:06:58.958 }, 00:06:58.958 { 00:06:58.958 "subsystem": "ublk", 00:06:58.958 "config": [] 00:06:58.958 }, 00:06:58.958 { 00:06:58.958 "subsystem": "nbd", 00:06:58.958 "config": [] 00:06:58.958 }, 00:06:58.958 { 00:06:58.958 "subsystem": "nvmf", 00:06:58.958 "config": [ 00:06:58.958 { 00:06:58.958 "method": "nvmf_set_config", 00:06:58.958 "params": { 00:06:58.958 "discovery_filter": "match_any", 00:06:58.958 "admin_cmd_passthru": { 00:06:58.958 "identify_ctrlr": false 00:06:58.958 } 00:06:58.958 } 00:06:58.958 }, 00:06:58.958 { 00:06:58.958 "method": "nvmf_set_max_subsystems", 00:06:58.958 "params": { 00:06:58.958 "max_subsystems": 1024 00:06:58.958 } 00:06:58.958 }, 00:06:58.958 { 00:06:58.958 "method": "nvmf_set_crdt", 00:06:58.958 "params": { 00:06:58.958 "crdt1": 0, 00:06:58.958 "crdt2": 0, 00:06:58.958 "crdt3": 0 00:06:58.958 } 00:06:58.958 }, 00:06:58.958 { 00:06:58.958 "method": "nvmf_create_transport", 
00:06:58.958 "params": { 00:06:58.958 "trtype": "TCP", 00:06:58.958 "max_queue_depth": 128, 00:06:58.958 "max_io_qpairs_per_ctrlr": 127, 00:06:58.958 "in_capsule_data_size": 4096, 00:06:58.958 "max_io_size": 131072, 00:06:58.958 "io_unit_size": 131072, 00:06:58.958 "max_aq_depth": 128, 00:06:58.958 "num_shared_buffers": 511, 00:06:58.958 "buf_cache_size": 4294967295, 00:06:58.958 "dif_insert_or_strip": false, 00:06:58.958 "zcopy": false, 00:06:58.958 "c2h_success": true, 00:06:58.958 "sock_priority": 0, 00:06:58.958 "abort_timeout_sec": 1, 00:06:58.958 "ack_timeout": 0, 00:06:58.958 "data_wr_pool_size": 0 00:06:58.958 } 00:06:58.958 } 00:06:58.958 ] 00:06:58.958 }, 00:06:58.958 { 00:06:58.958 "subsystem": "iscsi", 00:06:58.958 "config": [ 00:06:58.958 { 00:06:58.958 "method": "iscsi_set_options", 00:06:58.958 "params": { 00:06:58.958 "node_base": "iqn.2016-06.io.spdk", 00:06:58.958 "max_sessions": 128, 00:06:58.958 "max_connections_per_session": 2, 00:06:58.958 "max_queue_depth": 64, 00:06:58.958 "default_time2wait": 2, 00:06:58.958 "default_time2retain": 20, 00:06:58.958 "first_burst_length": 8192, 00:06:58.958 "immediate_data": true, 00:06:58.958 "allow_duplicated_isid": false, 00:06:58.958 "error_recovery_level": 0, 00:06:58.958 "nop_timeout": 60, 00:06:58.958 "nop_in_interval": 30, 00:06:58.958 "disable_chap": false, 00:06:58.958 "require_chap": false, 00:06:58.958 "mutual_chap": false, 00:06:58.958 "chap_group": 0, 00:06:58.958 "max_large_datain_per_connection": 64, 00:06:58.958 "max_r2t_per_connection": 4, 00:06:58.958 "pdu_pool_size": 36864, 00:06:58.958 "immediate_data_pool_size": 16384, 00:06:58.958 "data_out_pool_size": 2048 00:06:58.958 } 00:06:58.958 } 00:06:58.958 ] 00:06:58.958 } 00:06:58.958 ] 00:06:58.958 } 00:06:58.958 13:06:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:58.958 13:06:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 603206 00:06:58.958 13:06:39 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 603206 ']' 00:06:58.958 13:06:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 603206 00:06:58.958 13:06:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:58.958 13:06:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:58.958 13:06:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 603206 00:06:58.958 13:06:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:58.958 13:06:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:58.958 13:06:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 603206' 00:06:58.958 killing process with pid 603206 00:06:58.958 13:06:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 603206 00:06:58.958 13:06:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 603206 00:06:59.216 13:06:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=603474 00:06:59.216 13:06:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:59.217 13:06:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:07:04.482 13:06:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 603474 00:07:04.482 13:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 603474 ']' 00:07:04.482 13:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 603474 00:07:04.482 13:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:07:04.482 13:06:44 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.482 13:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 603474 00:07:04.482 13:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:04.482 13:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:04.482 13:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 603474' 00:07:04.482 killing process with pid 603474 00:07:04.482 13:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 603474 00:07:04.482 13:06:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 603474 00:07:04.741 13:06:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:07:04.741 13:06:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:07:04.741 00:07:04.741 real 0m6.900s 00:07:04.741 user 0m6.547s 00:07:04.741 sys 0m0.870s 00:07:04.741 13:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.741 13:06:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:04.741 ************************************ 00:07:04.741 END TEST skip_rpc_with_json 00:07:04.741 ************************************ 00:07:04.741 13:06:45 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:04.741 13:06:45 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.741 13:06:45 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.741 13:06:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.741 ************************************ 00:07:04.741 START TEST skip_rpc_with_delay 00:07:04.741 
************************************ 00:07:04.741 13:06:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:07:04.741 13:06:45 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:04.741 13:06:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:07:04.741 13:06:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:04.741 13:06:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:04.741 13:06:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.741 13:06:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:04.741 13:06:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.741 13:06:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:04.741 13:06:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.741 13:06:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:04.741 13:06:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:04.741 13:06:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 
--wait-for-rpc 00:07:04.741 [2024-07-26 13:06:45.213284] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:07:04.741 [2024-07-26 13:06:45.213368] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:04.741 13:06:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:07:04.741 13:06:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.741 13:06:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.741 13:06:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.741 00:07:04.741 real 0m0.084s 00:07:04.741 user 0m0.053s 00:07:04.741 sys 0m0.029s 00:07:04.741 13:06:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.741 13:06:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:04.741 ************************************ 00:07:04.741 END TEST skip_rpc_with_delay 00:07:04.741 ************************************ 00:07:05.000 13:06:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:05.000 13:06:45 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:05.000 13:06:45 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:05.000 13:06:45 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.000 13:06:45 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.000 13:06:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.000 ************************************ 00:07:05.000 START TEST exit_on_failed_rpc_init 00:07:05.000 ************************************ 00:07:05.000 13:06:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:07:05.000 13:06:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # 
local spdk_pid=604388 00:07:05.000 13:06:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 604388 00:07:05.000 13:06:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 604388 ']' 00:07:05.000 13:06:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.000 13:06:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.000 13:06:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.000 13:06:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.000 13:06:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:05.000 13:06:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:05.000 [2024-07-26 13:06:45.376217] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:07:05.000 [2024-07-26 13:06:45.376273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid604388 ] 00:07:05.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.000 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:05.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.000 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:05.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.000 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:05.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.000 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:05.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.000 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:05.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.000 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:05.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.000 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:05.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.000 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:05.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.000 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:05.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.000 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:05.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.000 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:05.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.000 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:05.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.000 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:05.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.000 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:05.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.000 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:05.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.000 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:05.000 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.001 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:05.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.001 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:05.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.001 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:05.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.001 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:05.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.001 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:05.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.001 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:05.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.001 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:05.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.001 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:05.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.001 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:05.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.001 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:05.001 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.001 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:05.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.001 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:05.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.001 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:05.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.001 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:05.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.001 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:05.001 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.001 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:05.001 [2024-07-26 13:06:45.509822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.260 [2024-07-26 13:06:45.597856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.828 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.828 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:07:05.828 13:06:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:05.828 13:06:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:05.828 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:07:05.828 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:05.828 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:05.828 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.828 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:05.828 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.828 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:05.828 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.828 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:05.828 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:05.828 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:05.828 [2024-07-26 13:06:46.306022] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:07:05.828 [2024-07-26 13:06:46.306084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid604591 ] 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:06.087 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:06.087 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:06.087 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:06.087 [2024-07-26 13:06:46.424823] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.087 [2024-07-26 13:06:46.508354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.087 [2024-07-26 13:06:46.508429] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:07:06.087 [2024-07-26 13:06:46.508445] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:06.087 [2024-07-26 13:06:46.508456] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.087 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:07:06.087 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.087 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:07:06.087 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:07:06.087 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:07:06.087 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.087 13:06:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:06.087 13:06:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 604388 00:07:06.087 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 604388 ']' 00:07:06.087 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 604388 00:07:06.088 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:07:06.088 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:06.088 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 604388 00:07:06.346 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:06.346 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:06.346 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 604388' 
00:07:06.347 killing process with pid 604388 00:07:06.347 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 604388 00:07:06.347 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 604388 00:07:06.605 00:07:06.605 real 0m1.667s 00:07:06.605 user 0m1.900s 00:07:06.605 sys 0m0.556s 00:07:06.605 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.605 13:06:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:06.605 ************************************ 00:07:06.605 END TEST exit_on_failed_rpc_init 00:07:06.605 ************************************ 00:07:06.605 13:06:47 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:07:06.605 00:07:06.605 real 0m14.476s 00:07:06.605 user 0m13.643s 00:07:06.605 sys 0m2.185s 00:07:06.605 13:06:47 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.605 13:06:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.605 ************************************ 00:07:06.605 END TEST skip_rpc 00:07:06.605 ************************************ 00:07:06.605 13:06:47 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:06.605 13:06:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.605 13:06:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.605 13:06:47 -- common/autotest_common.sh@10 -- # set +x 00:07:06.605 ************************************ 00:07:06.605 START TEST rpc_client 00:07:06.605 ************************************ 00:07:06.605 13:06:47 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:06.864 * Looking for test storage... 
00:07:06.864 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client 00:07:06.864 13:06:47 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:06.864 OK 00:07:06.864 13:06:47 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:06.864 00:07:06.864 real 0m0.141s 00:07:06.864 user 0m0.059s 00:07:06.864 sys 0m0.093s 00:07:06.864 13:06:47 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.864 13:06:47 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:06.864 ************************************ 00:07:06.864 END TEST rpc_client 00:07:06.864 ************************************ 00:07:06.864 13:06:47 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:07:06.864 13:06:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.864 13:06:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.864 13:06:47 -- common/autotest_common.sh@10 -- # set +x 00:07:06.864 ************************************ 00:07:06.864 START TEST json_config 00:07:06.864 ************************************ 00:07:06.864 13:06:47 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:07:07.124 13:06:47 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:07:07.124 13:06:47 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:07.124 13:06:47 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.124 13:06:47 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.124 13:06:47 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.124 13:06:47 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.124 13:06:47 json_config -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:07:07.124 13:06:47 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.124 13:06:47 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.124 13:06:47 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.124 13:06:47 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.124 13:06:47 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.124 13:06:47 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:07:07.124 13:06:47 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:07:07.124 13:06:47 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.124 13:06:47 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.124 13:06:47 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:07.124 13:06:47 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.124 13:06:47 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:07:07.124 13:06:47 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.124 13:06:47 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.124 13:06:47 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.124 13:06:47 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.124 13:06:47 
json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.124 13:06:47 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.124 13:06:47 json_config -- paths/export.sh@5 -- # export PATH 00:07:07.124 13:06:47 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.124 13:06:47 json_config -- nvmf/common.sh@47 -- # : 0 00:07:07.124 13:06:47 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:07.124 13:06:47 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:07.124 13:06:47 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.124 13:06:47 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.124 13:06:47 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.124 13:06:47 json_config -- nvmf/common.sh@33 -- # 
'[' -n '' ']' 00:07:07.124 13:06:47 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:07.124 13:06:47 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:07.124 13:06:47 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:07:07.124 13:06:47 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:07.124 13:06:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:07.124 13:06:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:07.124 13:06:47 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:07.124 13:06:47 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:07.124 13:06:47 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:07.124 13:06:47 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:07.124 13:06:47 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:07.124 13:06:47 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:07.124 13:06:47 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:07.124 13:06:47 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json') 00:07:07.124 13:06:47 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:07.124 13:06:47 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:07.124 13:06:47 json_config -- json_config/json_config.sh@359 -- # 
trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:07.124 13:06:47 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:07:07.124 INFO: JSON configuration test init 00:07:07.124 13:06:47 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:07:07.124 13:06:47 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:07:07.124 13:06:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:07.124 13:06:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:07.124 13:06:47 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:07:07.124 13:06:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:07.124 13:06:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:07.124 13:06:47 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:07:07.124 13:06:47 json_config -- json_config/common.sh@9 -- # local app=target 00:07:07.124 13:06:47 json_config -- json_config/common.sh@10 -- # shift 00:07:07.124 13:06:47 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:07.125 13:06:47 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:07.125 13:06:47 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:07.125 13:06:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:07.125 13:06:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:07.125 13:06:47 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=604968 00:07:07.125 13:06:47 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:07.125 Waiting for target to run... 
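The `waitforlisten 604968 /var/tmp/spdk_tgt.sock` call that follows polls with a bounded retry count (`max_retries=100` in `autotest_common.sh`) until the target's RPC socket exists. A self-contained sketch of that polling pattern, with a plain file standing in for the UNIX socket and a hypothetical background job simulating the target coming up (both stand-ins are assumptions for illustration, not SPDK code):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern: poll for the RPC socket with a retry cap.
waitforlisten() {
    local rpc_addr=$1 max_retries=100
    while (( max_retries-- > 0 )); do
        # accept either a real UNIX socket or, for this sketch, a plain file
        [[ -S $rpc_addr || -e $rpc_addr ]] && return 0
        sleep 0.1
    done
    return 1                     # target never came up
}

sock=$(mktemp -u)                # stand-in path for /var/tmp/spdk_tgt.sock
( sleep 0.3; : > "$sock" ) &     # simulate the target creating its socket
waitforlisten "$sock" && echo "target is listening"
```

In the real harness the polled path is a UNIX domain socket created by `spdk_tgt`, and a failed wait trips the `on_error_exit` trap set just above.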
00:07:07.125 13:06:47 json_config -- json_config/common.sh@25 -- # waitforlisten 604968 /var/tmp/spdk_tgt.sock 00:07:07.125 13:06:47 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:07.125 13:06:47 json_config -- common/autotest_common.sh@831 -- # '[' -z 604968 ']' 00:07:07.125 13:06:47 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:07.125 13:06:47 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.125 13:06:47 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:07.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:07.125 13:06:47 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.125 13:06:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:07.125 [2024-07-26 13:06:47.512325] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
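Once the target is up, every `tgt_rpc …` trace in this log expands to `scripts/rpc.py` pinned to the target socket via `-s /var/tmp/spdk_tgt.sock`. A minimal stubbed sketch of that wrapper shape (the `echo` stub is an assumption here so the sketch runs without an SPDK tree; real runs invoke `rpc.py` itself):

```shell
#!/usr/bin/env bash
# Stub illustrating the tgt_rpc call shape seen throughout this log.
rpc_py="rpc.py -s /var/tmp/spdk_tgt.sock"   # real harness: $rootdir/scripts/rpc.py
tgt_rpc() { echo $rpc_py "$@"; }            # echo stands in for the real invocation

tgt_rpc bdev_malloc_create 32 512 --name Malloc0
# prints: rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0
```

Pinning the socket in one place is what lets the test address the target and initiator apps separately (`/var/tmp/spdk_tgt.sock` vs `/var/tmp/spdk_initiator.sock`, as declared in the `app_socket` array earlier in this log).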
00:07:07.125 [2024-07-26 13:06:47.512389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid604968 ] 00:07:07.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.383 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:07.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.383 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:07.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.383 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:07.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.383 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:07.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.383 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:07.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.383 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:07.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.383 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:07.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.383 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:07.383 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.383 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:07.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.384 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:07.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.384 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:07.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.384 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:07.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.384 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:07.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.384 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:07.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.384 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:07.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.384 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:07.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.384 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:07.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.384 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:07.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.384 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:07.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.384 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:07.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.384 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:07.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.384 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:07.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.384 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:07.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.384 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:07.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.384 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:07.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.384 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:07.384 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.384 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:07.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.384 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:07.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.384 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:07.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.384 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:07.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.384 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:07.384 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:07.384 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:07.384 [2024-07-26 13:06:47.872793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.642 [2024-07-26 13:06:47.949730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.901 13:06:48 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.901 13:06:48 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:07.901 13:06:48 json_config -- json_config/common.sh@26 -- # echo '' 00:07:07.901 00:07:07.901 13:06:48 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:07:07.901 13:06:48 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:07:07.901 13:06:48 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:07.901 13:06:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:07.901 13:06:48 json_config -- json_config/json_config.sh@99 -- # [[ 1 -eq 1 ]] 00:07:07.901 13:06:48 json_config -- json_config/json_config.sh@100 -- # tgt_rpc dpdk_cryptodev_scan_accel_module 00:07:07.901 13:06:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock dpdk_cryptodev_scan_accel_module 00:07:08.160 13:06:48 json_config -- json_config/json_config.sh@101 -- # tgt_rpc accel_assign_opc -o encrypt -m dpdk_cryptodev 00:07:08.160 13:06:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o encrypt -m dpdk_cryptodev 00:07:08.418 [2024-07-26 13:06:48.840583] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:07:08.418 13:06:48 json_config -- json_config/json_config.sh@102 -- # tgt_rpc accel_assign_opc -o decrypt -m dpdk_cryptodev 00:07:08.418 13:06:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o decrypt -m dpdk_cryptodev 00:07:08.706 [2024-07-26 13:06:49.065169] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:07:08.706 13:06:49 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:07:08.706 13:06:49 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:08.706 13:06:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:08.706 13:06:49 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:08.706 13:06:49 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:07:08.706 13:06:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:08.965 [2024-07-26 13:06:49.354259] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:07:14.231 13:06:54 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:07:14.231 13:06:54 json_config -- 
json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:14.231 13:06:54 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:14.231 13:06:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:14.231 13:06:54 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:14.231 13:06:54 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:14.231 13:06:54 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:14.231 13:06:54 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:07:14.231 13:06:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:14.231 13:06:54 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:07:14.231 13:06:54 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:07:14.231 13:06:54 json_config -- json_config/json_config.sh@48 -- # local get_types 00:07:14.231 13:06:54 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:07:14.231 13:06:54 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:07:14.231 13:06:54 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:07:14.231 13:06:54 json_config -- json_config/json_config.sh@51 -- # sort 00:07:14.231 13:06:54 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:07:14.231 13:06:54 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:07:14.231 13:06:54 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:07:14.231 13:06:54 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:07:14.231 13:06:54 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:14.231 13:06:54 json_config -- common/autotest_common.sh@10 
-- # set +x 00:07:14.231 13:06:54 json_config -- json_config/json_config.sh@59 -- # return 0 00:07:14.231 13:06:54 json_config -- json_config/json_config.sh@282 -- # [[ 1 -eq 1 ]] 00:07:14.231 13:06:54 json_config -- json_config/json_config.sh@283 -- # create_bdev_subsystem_config 00:07:14.231 13:06:54 json_config -- json_config/json_config.sh@109 -- # timing_enter create_bdev_subsystem_config 00:07:14.231 13:06:54 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:14.231 13:06:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:14.231 13:06:54 json_config -- json_config/json_config.sh@111 -- # expected_notifications=() 00:07:14.231 13:06:54 json_config -- json_config/json_config.sh@111 -- # local expected_notifications 00:07:14.231 13:06:54 json_config -- json_config/json_config.sh@115 -- # expected_notifications+=($(get_notifications)) 00:07:14.231 13:06:54 json_config -- json_config/json_config.sh@115 -- # get_notifications 00:07:14.232 13:06:54 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:07:14.232 13:06:54 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:14.232 13:06:54 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:14.232 13:06:54 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:07:14.232 13:06:54 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:07:14.232 13:06:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:07:14.490 13:06:54 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:07:14.490 13:06:54 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:14.490 13:06:54 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:14.490 13:06:54 json_config -- 
json_config/json_config.sh@117 -- # [[ 1 -eq 1 ]] 00:07:14.490 13:06:54 json_config -- json_config/json_config.sh@118 -- # local lvol_store_base_bdev=Nvme0n1 00:07:14.490 13:06:54 json_config -- json_config/json_config.sh@120 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:07:14.490 13:06:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:07:14.748 Nvme0n1p0 Nvme0n1p1 00:07:14.748 13:06:55 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_split_create Malloc0 3 00:07:14.748 13:06:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:07:15.006 [2024-07-26 13:06:55.368480] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:15.006 [2024-07-26 13:06:55.368529] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:15.006 00:07:15.006 13:06:55 json_config -- json_config/json_config.sh@122 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:07:15.006 13:06:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:07:15.264 Malloc3 00:07:15.264 13:06:55 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:07:15.265 13:06:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:07:15.523 [2024-07-26 13:06:55.805690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:15.523 [2024-07-26 13:06:55.805732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:15.523 [2024-07-26 
13:06:55.805752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2729f00 00:07:15.524 [2024-07-26 13:06:55.805764] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:15.524 [2024-07-26 13:06:55.807102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:15.524 [2024-07-26 13:06:55.807128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:07:15.524 PTBdevFromMalloc3 00:07:15.524 13:06:55 json_config -- json_config/json_config.sh@125 -- # tgt_rpc bdev_null_create Null0 32 512 00:07:15.524 13:06:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:07:15.524 Null0 00:07:15.524 13:06:56 json_config -- json_config/json_config.sh@127 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:07:15.524 13:06:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:07:15.782 Malloc0 00:07:15.782 13:06:56 json_config -- json_config/json_config.sh@128 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:07:15.782 13:06:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:07:16.040 Malloc1 00:07:16.040 13:06:56 json_config -- json_config/json_config.sh@141 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:07:16.040 13:06:56 json_config -- json_config/json_config.sh@144 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 
00:07:16.299 102400+0 records in 00:07:16.299 102400+0 records out 00:07:16.299 104857600 bytes (105 MB, 100 MiB) copied, 0.283135 s, 370 MB/s 00:07:16.299 13:06:56 json_config -- json_config/json_config.sh@145 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:07:16.299 13:06:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:07:16.557 aio_disk 00:07:16.557 13:06:57 json_config -- json_config/json_config.sh@146 -- # expected_notifications+=(bdev_register:aio_disk) 00:07:16.557 13:06:57 json_config -- json_config/json_config.sh@151 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:07:16.557 13:06:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:07:20.743 021af38e-74e5-47df-a4f5-ea8a9a108f0e 00:07:20.743 13:07:01 json_config -- json_config/json_config.sh@158 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:07:20.743 13:07:01 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:07:20.743 13:07:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:07:21.002 13:07:01 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:07:21.002 13:07:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t 
lvol1 32 00:07:21.261 13:07:01 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:07:21.261 13:07:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:07:21.519 13:07:01 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:07:21.519 13:07:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:07:21.778 13:07:02 json_config -- json_config/json_config.sh@161 -- # [[ 1 -eq 1 ]] 00:07:21.778 13:07:02 json_config -- json_config/json_config.sh@162 -- # tgt_rpc bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:07:21.778 13:07:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:07:21.778 MallocForCryptoBdev 00:07:21.778 13:07:02 json_config -- json_config/json_config.sh@163 -- # lspci -d:37c8 00:07:21.778 13:07:02 json_config -- json_config/json_config.sh@163 -- # wc -l 00:07:22.037 13:07:02 json_config -- json_config/json_config.sh@163 -- # [[ 5 -eq 0 ]] 00:07:22.037 13:07:02 json_config -- json_config/json_config.sh@166 -- # local crypto_driver=crypto_qat 00:07:22.037 13:07:02 json_config -- json_config/json_config.sh@169 -- # tgt_rpc bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:07:22.037 13:07:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:07:22.037 [2024-07-26 13:07:02.531075] vbdev_crypto_rpc.c: 
136:rpc_bdev_crypto_create: *WARNING*: "crypto_pmd" parameters is obsolete and ignored 00:07:22.037 CryptoMallocBdev 00:07:22.037 13:07:02 json_config -- json_config/json_config.sh@173 -- # expected_notifications+=(bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev) 00:07:22.037 13:07:02 json_config -- json_config/json_config.sh@176 -- # [[ 0 -eq 1 ]] 00:07:22.037 13:07:02 json_config -- json_config/json_config.sh@182 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:b69c0b9e-60e2-48db-bce5-c68ab07f0d29 bdev_register:bbee0d51-87c6-481e-9c79-254a8c87aed9 bdev_register:7bb9e280-8284-45ed-85a7-3984c0f67418 bdev_register:5a051d4a-7b96-4736-b397-34f5945d1074 bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:07:22.037 13:07:02 json_config -- json_config/json_config.sh@71 -- # local events_to_check 00:07:22.037 13:07:02 json_config -- json_config/json_config.sh@72 -- # local recorded_events 00:07:22.037 13:07:02 json_config -- json_config/json_config.sh@75 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:07:22.037 13:07:02 json_config -- json_config/json_config.sh@75 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:b69c0b9e-60e2-48db-bce5-c68ab07f0d29 bdev_register:bbee0d51-87c6-481e-9c79-254a8c87aed9 bdev_register:7bb9e280-8284-45ed-85a7-3984c0f67418 bdev_register:5a051d4a-7b96-4736-b397-34f5945d1074 bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:07:22.037 13:07:02 json_config -- 
json_config/json_config.sh@75 -- # sort 00:07:22.037 13:07:02 json_config -- json_config/json_config.sh@76 -- # recorded_events=($(get_notifications | sort)) 00:07:22.037 13:07:02 json_config -- json_config/json_config.sh@76 -- # get_notifications 00:07:22.037 13:07:02 json_config -- json_config/json_config.sh@76 -- # sort 00:07:22.037 13:07:02 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:07:22.296 13:07:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p1 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p0 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc3 00:07:22.296 13:07:02 json_config -- 
json_config/json_config.sh@65 -- # IFS=: 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:PTBdevFromMalloc3 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Null0 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p2 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p1 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p0 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc1 00:07:22.296 13:07:02 json_config -- 
json_config/json_config.sh@65 -- # IFS=: 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:aio_disk 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:b69c0b9e-60e2-48db-bce5-c68ab07f0d29 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:bbee0d51-87c6-481e-9c79-254a8c87aed9 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:7bb9e280-8284-45ed-85a7-3984c0f67418 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:5a051d4a-7b96-4736-b397-34f5945d1074 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:MallocForCryptoBdev 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:22.296 13:07:02 json_config -- 
json_config/json_config.sh@66 -- # echo bdev_register:CryptoMallocBdev 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@78 -- # [[ bdev_register:5a051d4a-7b96-4736-b397-34f5945d1074 bdev_register:7bb9e280-8284-45ed-85a7-3984c0f67418 bdev_register:aio_disk bdev_register:b69c0b9e-60e2-48db-bce5-c68ab07f0d29 bdev_register:bbee0d51-87c6-481e-9c79-254a8c87aed9 bdev_register:CryptoMallocBdev bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\5\a\0\5\1\d\4\a\-\7\b\9\6\-\4\7\3\6\-\b\3\9\7\-\3\4\f\5\9\4\5\d\1\0\7\4\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\b\b\9\e\2\8\0\-\8\2\8\4\-\4\5\e\d\-\8\5\a\7\-\3\9\8\4\c\0\f\6\7\4\1\8\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\b\6\9\c\0\b\9\e\-\6\0\e\2\-\4\8\d\b\-\b\c\e\5\-\c\6\8\a\b\0\7\f\0\d\2\9\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\b\b\e\e\0\d\5\1\-\8\7\c\6\-\4\8\1\e\-\9\c\7\9\-\2\5\4\a\8\c\8\7\a\e\d\9\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\C\r\y\p\t\o\M\a\l\l\o\c\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\F\o\r\C\r\y\p\t\o\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3 ]] 
00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@90 -- # cat 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@90 -- # printf ' %s\n' bdev_register:5a051d4a-7b96-4736-b397-34f5945d1074 bdev_register:7bb9e280-8284-45ed-85a7-3984c0f67418 bdev_register:aio_disk bdev_register:b69c0b9e-60e2-48db-bce5-c68ab07f0d29 bdev_register:bbee0d51-87c6-481e-9c79-254a8c87aed9 bdev_register:CryptoMallocBdev bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 00:07:22.296 Expected events matched: 00:07:22.296 bdev_register:5a051d4a-7b96-4736-b397-34f5945d1074 00:07:22.296 bdev_register:7bb9e280-8284-45ed-85a7-3984c0f67418 00:07:22.296 bdev_register:aio_disk 00:07:22.296 bdev_register:b69c0b9e-60e2-48db-bce5-c68ab07f0d29 00:07:22.296 bdev_register:bbee0d51-87c6-481e-9c79-254a8c87aed9 00:07:22.296 bdev_register:CryptoMallocBdev 00:07:22.296 bdev_register:Malloc0 00:07:22.296 bdev_register:Malloc0p0 00:07:22.296 bdev_register:Malloc0p1 00:07:22.296 bdev_register:Malloc0p2 00:07:22.296 bdev_register:Malloc1 00:07:22.296 bdev_register:Malloc3 00:07:22.296 bdev_register:MallocForCryptoBdev 00:07:22.296 bdev_register:Null0 00:07:22.296 bdev_register:Nvme0n1 00:07:22.296 bdev_register:Nvme0n1p0 00:07:22.296 bdev_register:Nvme0n1p1 00:07:22.296 bdev_register:PTBdevFromMalloc3 00:07:22.296 13:07:02 json_config -- json_config/json_config.sh@184 -- # timing_exit create_bdev_subsystem_config 00:07:22.296 13:07:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:22.296 13:07:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:22.555 13:07:02 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:07:22.555 13:07:02 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 
00:07:22.555 13:07:02 json_config -- json_config/json_config.sh@294 -- # [[ 0 -eq 1 ]] 00:07:22.555 13:07:02 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:07:22.555 13:07:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:22.555 13:07:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:22.555 13:07:02 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:07:22.555 13:07:02 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:22.555 13:07:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:22.813 MallocBdevForConfigChangeCheck 00:07:22.813 13:07:03 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:07:22.813 13:07:03 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:22.813 13:07:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:22.813 13:07:03 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:07:22.813 13:07:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:23.072 13:07:03 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:07:23.072 INFO: shutting down applications... 
00:07:23.072 13:07:03 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:07:23.072 13:07:03 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:07:23.072 13:07:03 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:07:23.072 13:07:03 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:23.330 [2024-07-26 13:07:03.650497] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:07:25.859 Calling clear_iscsi_subsystem 00:07:25.859 Calling clear_nvmf_subsystem 00:07:25.859 Calling clear_nbd_subsystem 00:07:25.859 Calling clear_ublk_subsystem 00:07:25.859 Calling clear_vhost_blk_subsystem 00:07:25.859 Calling clear_vhost_scsi_subsystem 00:07:25.859 Calling clear_bdev_subsystem 00:07:25.859 13:07:06 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py 00:07:25.859 13:07:06 json_config -- json_config/json_config.sh@347 -- # count=100 00:07:25.859 13:07:06 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:07:25.859 13:07:06 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:25.859 13:07:06 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:25.859 13:07:06 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:07:26.117 13:07:06 json_config -- json_config/json_config.sh@349 -- # break 00:07:26.117 13:07:06 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:07:26.117 13:07:06 
json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:07:26.117 13:07:06 json_config -- json_config/common.sh@31 -- # local app=target 00:07:26.117 13:07:06 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:26.117 13:07:06 json_config -- json_config/common.sh@35 -- # [[ -n 604968 ]] 00:07:26.117 13:07:06 json_config -- json_config/common.sh@38 -- # kill -SIGINT 604968 00:07:26.117 13:07:06 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:26.117 13:07:06 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:26.117 13:07:06 json_config -- json_config/common.sh@41 -- # kill -0 604968 00:07:26.117 13:07:06 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:26.684 13:07:06 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:26.684 13:07:06 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:26.684 13:07:06 json_config -- json_config/common.sh@41 -- # kill -0 604968 00:07:26.684 13:07:06 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:26.684 13:07:06 json_config -- json_config/common.sh@43 -- # break 00:07:26.684 13:07:06 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:26.684 13:07:06 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:26.684 SPDK target shutdown done 00:07:26.684 13:07:06 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:07:26.684 INFO: relaunching applications... 
00:07:26.684 13:07:06 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:07:26.684 13:07:06 json_config -- json_config/common.sh@9 -- # local app=target 00:07:26.684 13:07:06 json_config -- json_config/common.sh@10 -- # shift 00:07:26.684 13:07:06 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:26.684 13:07:06 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:26.684 13:07:06 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:26.684 13:07:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:26.684 13:07:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:26.684 13:07:06 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=608406 00:07:26.684 13:07:06 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:26.684 Waiting for target to run... 00:07:26.684 13:07:06 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:07:26.684 13:07:06 json_config -- json_config/common.sh@25 -- # waitforlisten 608406 /var/tmp/spdk_tgt.sock 00:07:26.684 13:07:06 json_config -- common/autotest_common.sh@831 -- # '[' -z 608406 ']' 00:07:26.684 13:07:06 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:26.684 13:07:06 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:26.684 13:07:06 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:26.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:07:26.684 13:07:06 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.684 13:07:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:26.684 [2024-07-26 13:07:07.046463] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:07:26.684 [2024-07-26 13:07:07.046527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid608406 ] 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3d:02.1 cannot be used 
00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:27.291 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:27.291 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:27.291 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:27.291 [2024-07-26 13:07:07.552626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.291 [2024-07-26 13:07:07.653991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.291 [2024-07-26 13:07:07.708042] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:07:27.292 [2024-07-26 13:07:07.716076] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:07:27.292 [2024-07-26 13:07:07.724095] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:07:27.554 [2024-07-26 13:07:07.805093] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:07:29.455 [2024-07-26 13:07:09.944725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on Malloc3 00:07:29.455 [2024-07-26 13:07:09.944789] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:29.455 [2024-07-26 13:07:09.944803] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:29.455 [2024-07-26 13:07:09.952741] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:07:29.455 [2024-07-26 13:07:09.952765] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:07:29.455 [2024-07-26 13:07:09.960755] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:29.455 [2024-07-26 13:07:09.960778] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:29.455 [2024-07-26 13:07:09.968790] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "CryptoMallocBdev_AES_CBC" 00:07:29.455 [2024-07-26 13:07:09.968815] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: MallocForCryptoBdev 00:07:29.455 [2024-07-26 13:07:09.968827] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:32.739 [2024-07-26 13:07:12.868637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:32.739 [2024-07-26 13:07:12.868684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.739 [2024-07-26 13:07:12.868699] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11679f0 00:07:32.739 [2024-07-26 13:07:12.868711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.739 [2024-07-26 13:07:12.868981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.739 [2024-07-26 13:07:12.869000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:07:32.739 13:07:13 json_config -- 
common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.739 13:07:13 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:32.739 13:07:13 json_config -- json_config/common.sh@26 -- # echo '' 00:07:32.739 00:07:32.739 13:07:13 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:07:32.739 13:07:13 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:32.739 INFO: Checking if target configuration is the same... 00:07:32.739 13:07:13 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:07:32.739 13:07:13 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:07:32.739 13:07:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:32.739 + '[' 2 -ne 2 ']' 00:07:32.739 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:32.739 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 
00:07:32.739 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:07:32.739 +++ basename /dev/fd/62 00:07:32.739 ++ mktemp /tmp/62.XXX 00:07:32.739 + tmp_file_1=/tmp/62.f8a 00:07:32.739 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:07:32.739 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:32.739 + tmp_file_2=/tmp/spdk_tgt_config.json.hQN 00:07:32.739 + ret=0 00:07:32.739 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:32.997 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:33.254 + diff -u /tmp/62.f8a /tmp/spdk_tgt_config.json.hQN 00:07:33.254 + echo 'INFO: JSON config files are the same' 00:07:33.254 INFO: JSON config files are the same 00:07:33.254 + rm /tmp/62.f8a /tmp/spdk_tgt_config.json.hQN 00:07:33.254 + exit 0 00:07:33.254 13:07:13 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:07:33.254 13:07:13 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:33.254 INFO: changing configuration and checking if this can be detected... 
00:07:33.254 13:07:13 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:33.254 13:07:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:33.513 13:07:13 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:07:33.513 13:07:13 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:07:33.513 13:07:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:33.513 + '[' 2 -ne 2 ']' 00:07:33.513 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:33.513 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 
00:07:33.513 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:07:33.513 +++ basename /dev/fd/62 00:07:33.513 ++ mktemp /tmp/62.XXX 00:07:33.513 + tmp_file_1=/tmp/62.GyN 00:07:33.513 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:07:33.513 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:33.513 + tmp_file_2=/tmp/spdk_tgt_config.json.SWq 00:07:33.513 + ret=0 00:07:33.513 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:33.771 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:33.771 + diff -u /tmp/62.GyN /tmp/spdk_tgt_config.json.SWq 00:07:33.771 + ret=1 00:07:33.771 + echo '=== Start of file: /tmp/62.GyN ===' 00:07:33.771 + cat /tmp/62.GyN 00:07:33.771 + echo '=== End of file: /tmp/62.GyN ===' 00:07:33.771 + echo '' 00:07:33.771 + echo '=== Start of file: /tmp/spdk_tgt_config.json.SWq ===' 00:07:33.771 + cat /tmp/spdk_tgt_config.json.SWq 00:07:33.771 + echo '=== End of file: /tmp/spdk_tgt_config.json.SWq ===' 00:07:33.771 + echo '' 00:07:33.771 + rm /tmp/62.GyN /tmp/spdk_tgt_config.json.SWq 00:07:33.771 + exit 1 00:07:33.771 13:07:14 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:07:33.771 INFO: configuration change detected. 
00:07:33.771 13:07:14 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:07:33.771 13:07:14 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:07:33.772 13:07:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:33.772 13:07:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:33.772 13:07:14 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:07:33.772 13:07:14 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:07:33.772 13:07:14 json_config -- json_config/json_config.sh@321 -- # [[ -n 608406 ]] 00:07:33.772 13:07:14 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:07:33.772 13:07:14 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:07:33.772 13:07:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:33.772 13:07:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:33.772 13:07:14 json_config -- json_config/json_config.sh@190 -- # [[ 1 -eq 1 ]] 00:07:33.772 13:07:14 json_config -- json_config/json_config.sh@191 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:07:33.772 13:07:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:07:34.030 13:07:14 json_config -- json_config/json_config.sh@192 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:07:34.030 13:07:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:07:34.288 13:07:14 json_config -- json_config/json_config.sh@193 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:07:34.288 13:07:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete 
lvs_test/snapshot0 00:07:34.546 13:07:14 json_config -- json_config/json_config.sh@194 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:07:34.546 13:07:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:07:34.804 13:07:15 json_config -- json_config/json_config.sh@197 -- # uname -s 00:07:34.804 13:07:15 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:07:34.804 13:07:15 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:07:34.804 13:07:15 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:07:34.804 13:07:15 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:07:34.804 13:07:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:34.804 13:07:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:34.804 13:07:15 json_config -- json_config/json_config.sh@327 -- # killprocess 608406 00:07:34.804 13:07:15 json_config -- common/autotest_common.sh@950 -- # '[' -z 608406 ']' 00:07:34.804 13:07:15 json_config -- common/autotest_common.sh@954 -- # kill -0 608406 00:07:34.804 13:07:15 json_config -- common/autotest_common.sh@955 -- # uname 00:07:34.804 13:07:15 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:34.804 13:07:15 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 608406 00:07:34.804 13:07:15 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:34.804 13:07:15 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:34.804 13:07:15 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 608406' 00:07:34.804 killing process with pid 608406 00:07:34.804 13:07:15 json_config -- common/autotest_common.sh@969 -- # kill 608406 00:07:34.804 13:07:15 json_config -- 
common/autotest_common.sh@974 -- # wait 608406 00:07:38.086 13:07:17 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:07:38.086 13:07:17 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:07:38.086 13:07:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:38.086 13:07:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:38.086 13:07:17 json_config -- json_config/json_config.sh@332 -- # return 0 00:07:38.086 13:07:17 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:07:38.086 INFO: Success 00:07:38.086 00:07:38.086 real 0m30.594s 00:07:38.086 user 0m35.306s 00:07:38.086 sys 0m3.661s 00:07:38.086 13:07:17 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.086 13:07:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:38.086 ************************************ 00:07:38.086 END TEST json_config 00:07:38.086 ************************************ 00:07:38.086 13:07:17 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:38.086 13:07:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:38.086 13:07:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.086 13:07:17 -- common/autotest_common.sh@10 -- # set +x 00:07:38.086 ************************************ 00:07:38.086 START TEST json_config_extra_key 00:07:38.086 ************************************ 00:07:38.086 13:07:17 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:38.086 13:07:18 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:07:38.086 13:07:18 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:38.086 13:07:18 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.086 13:07:18 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.086 13:07:18 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.086 13:07:18 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.087 13:07:18 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.087 13:07:18 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.087 13:07:18 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.087 13:07:18 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.087 13:07:18 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.087 13:07:18 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.087 13:07:18 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:07:38.087 13:07:18 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:07:38.087 13:07:18 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.087 13:07:18 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.087 13:07:18 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:38.087 13:07:18 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.087 13:07:18 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:07:38.087 13:07:18 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:07:38.087 13:07:18 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.087 13:07:18 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.087 13:07:18 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.087 13:07:18 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.087 13:07:18 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.087 13:07:18 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:38.087 13:07:18 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.087 13:07:18 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:07:38.087 13:07:18 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:38.087 13:07:18 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:38.087 13:07:18 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.087 13:07:18 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.087 13:07:18 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.087 13:07:18 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:38.087 13:07:18 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:38.087 13:07:18 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:38.087 13:07:18 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:07:38.087 13:07:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:38.087 13:07:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:38.087 13:07:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:38.087 13:07:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:38.087 13:07:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 
1024') 00:07:38.087 13:07:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:38.087 13:07:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:38.087 13:07:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:38.087 13:07:18 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:38.087 13:07:18 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:38.087 INFO: launching applications... 00:07:38.087 13:07:18 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:07:38.087 13:07:18 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:38.087 13:07:18 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:38.087 13:07:18 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:38.087 13:07:18 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:38.087 13:07:18 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:38.087 13:07:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:38.087 13:07:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:38.087 13:07:18 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=610527 00:07:38.087 13:07:18 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:38.087 Waiting for target to run... 
00:07:38.087 13:07:18 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 610527 /var/tmp/spdk_tgt.sock 00:07:38.087 13:07:18 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 610527 ']' 00:07:38.087 13:07:18 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:38.087 13:07:18 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:07:38.087 13:07:18 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:38.087 13:07:18 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:38.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:38.087 13:07:18 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:38.087 13:07:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:38.087 [2024-07-26 13:07:18.174416] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:07:38.087 [2024-07-26 13:07:18.174479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid610527 ] 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:38.346 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:38.346 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:38.346 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:38.346 [2024-07-26 13:07:18.682800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.346 [2024-07-26 13:07:18.775577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.605 13:07:19 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:38.605 13:07:19 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:07:38.605 13:07:19 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:38.605 00:07:38.605 13:07:19 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:07:38.605 INFO: shutting down applications... 
00:07:38.605 13:07:19 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:38.605 13:07:19 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:38.605 13:07:19 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:38.605 13:07:19 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 610527 ]] 00:07:38.605 13:07:19 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 610527 00:07:38.605 13:07:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:38.605 13:07:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:38.605 13:07:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 610527 00:07:38.605 13:07:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:39.171 13:07:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:39.171 13:07:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:39.171 13:07:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 610527 00:07:39.171 13:07:19 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:39.171 13:07:19 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:39.171 13:07:19 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:39.171 13:07:19 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:39.171 SPDK target shutdown done 00:07:39.171 13:07:19 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:39.171 Success 00:07:39.171 00:07:39.171 real 0m1.573s 00:07:39.171 user 0m1.057s 00:07:39.171 sys 0m0.645s 00:07:39.171 13:07:19 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.171 13:07:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:39.171 ************************************ 
00:07:39.171 END TEST json_config_extra_key 00:07:39.171 ************************************ 00:07:39.171 13:07:19 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:39.171 13:07:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:39.171 13:07:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.171 13:07:19 -- common/autotest_common.sh@10 -- # set +x 00:07:39.171 ************************************ 00:07:39.171 START TEST alias_rpc 00:07:39.171 ************************************ 00:07:39.171 13:07:19 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:39.429 * Looking for test storage... 00:07:39.429 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc 00:07:39.429 13:07:19 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:39.429 13:07:19 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=610838 00:07:39.429 13:07:19 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:39.429 13:07:19 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 610838 00:07:39.429 13:07:19 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 610838 ']' 00:07:39.429 13:07:19 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.429 13:07:19 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:39.429 13:07:19 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:39.429 13:07:19 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:39.429 13:07:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.429 [2024-07-26 13:07:19.880827] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:07:39.429 [2024-07-26 13:07:19.880958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid610838 ] 00:07:39.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.688 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:39.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.688 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:39.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.688 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:39.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.688 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:39.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.688 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:39.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.688 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:39.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.688 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:39.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.688 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:39.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.688 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:39.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.688 EAL: Requested device 0000:3d:02.1 cannot be used 
00:07:39.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.688 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:39.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.688 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:39.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.688 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:39.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.688 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:39.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.688 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:39.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.688 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:39.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.688 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:39.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.688 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:39.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.688 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:39.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.688 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:39.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.688 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:39.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.688 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:39.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.688 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:39.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.688 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:39.688 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.688 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:39.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.688 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:39.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.688 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:39.688 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.689 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:39.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.689 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:39.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.689 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:39.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.689 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:39.689 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:39.689 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:39.689 [2024-07-26 13:07:20.088904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.689 [2024-07-26 13:07:20.171568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.255 13:07:20 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:40.255 13:07:20 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:40.255 13:07:20 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:40.513 13:07:20 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 610838 00:07:40.513 13:07:20 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 610838 ']' 00:07:40.513 13:07:20 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 610838 00:07:40.513 13:07:20 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:07:40.513 13:07:20 alias_rpc -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:40.513 13:07:20 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 610838 00:07:40.513 13:07:20 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:40.513 13:07:20 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:40.513 13:07:20 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 610838' 00:07:40.513 killing process with pid 610838 00:07:40.513 13:07:20 alias_rpc -- common/autotest_common.sh@969 -- # kill 610838 00:07:40.513 13:07:20 alias_rpc -- common/autotest_common.sh@974 -- # wait 610838 00:07:41.079 00:07:41.079 real 0m1.686s 00:07:41.079 user 0m1.764s 00:07:41.079 sys 0m0.604s 00:07:41.079 13:07:21 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.079 13:07:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.079 ************************************ 00:07:41.079 END TEST alias_rpc 00:07:41.079 ************************************ 00:07:41.079 13:07:21 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:07:41.079 13:07:21 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:41.079 13:07:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:41.079 13:07:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.079 13:07:21 -- common/autotest_common.sh@10 -- # set +x 00:07:41.079 ************************************ 00:07:41.079 START TEST spdkcli_tcp 00:07:41.079 ************************************ 00:07:41.079 13:07:21 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:41.079 * Looking for test storage... 
00:07:41.079 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli 00:07:41.079 13:07:21 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/common.sh 00:07:41.079 13:07:21 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:41.079 13:07:21 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py 00:07:41.079 13:07:21 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:41.079 13:07:21 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:41.079 13:07:21 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:41.079 13:07:21 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:41.079 13:07:21 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:41.079 13:07:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:41.079 13:07:21 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=611160 00:07:41.079 13:07:21 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 611160 00:07:41.079 13:07:21 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:41.079 13:07:21 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 611160 ']' 00:07:41.079 13:07:21 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.079 13:07:21 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.079 13:07:21 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:41.079 13:07:21 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.079 13:07:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:41.079 [2024-07-26 13:07:21.587853] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:07:41.079 [2024-07-26 13:07:21.587910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid611160 ] 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3d:02.1 cannot be used 
00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:41.337 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:41.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:41.337 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:41.337 [2024-07-26 13:07:21.722398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:41.337 [2024-07-26 13:07:21.806802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.337 [2024-07-26 13:07:21.806807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.271 13:07:22 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.271 13:07:22 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:07:42.271 13:07:22 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=611316 00:07:42.271 13:07:22 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:42.271 13:07:22 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:42.271 [ 00:07:42.271 
"bdev_malloc_delete", 00:07:42.271 "bdev_malloc_create", 00:07:42.271 "bdev_null_resize", 00:07:42.271 "bdev_null_delete", 00:07:42.271 "bdev_null_create", 00:07:42.271 "bdev_nvme_cuse_unregister", 00:07:42.271 "bdev_nvme_cuse_register", 00:07:42.271 "bdev_opal_new_user", 00:07:42.271 "bdev_opal_set_lock_state", 00:07:42.271 "bdev_opal_delete", 00:07:42.271 "bdev_opal_get_info", 00:07:42.271 "bdev_opal_create", 00:07:42.271 "bdev_nvme_opal_revert", 00:07:42.271 "bdev_nvme_opal_init", 00:07:42.271 "bdev_nvme_send_cmd", 00:07:42.271 "bdev_nvme_get_path_iostat", 00:07:42.271 "bdev_nvme_get_mdns_discovery_info", 00:07:42.271 "bdev_nvme_stop_mdns_discovery", 00:07:42.271 "bdev_nvme_start_mdns_discovery", 00:07:42.271 "bdev_nvme_set_multipath_policy", 00:07:42.271 "bdev_nvme_set_preferred_path", 00:07:42.271 "bdev_nvme_get_io_paths", 00:07:42.271 "bdev_nvme_remove_error_injection", 00:07:42.271 "bdev_nvme_add_error_injection", 00:07:42.271 "bdev_nvme_get_discovery_info", 00:07:42.271 "bdev_nvme_stop_discovery", 00:07:42.271 "bdev_nvme_start_discovery", 00:07:42.271 "bdev_nvme_get_controller_health_info", 00:07:42.271 "bdev_nvme_disable_controller", 00:07:42.271 "bdev_nvme_enable_controller", 00:07:42.271 "bdev_nvme_reset_controller", 00:07:42.271 "bdev_nvme_get_transport_statistics", 00:07:42.271 "bdev_nvme_apply_firmware", 00:07:42.271 "bdev_nvme_detach_controller", 00:07:42.271 "bdev_nvme_get_controllers", 00:07:42.271 "bdev_nvme_attach_controller", 00:07:42.271 "bdev_nvme_set_hotplug", 00:07:42.271 "bdev_nvme_set_options", 00:07:42.271 "bdev_passthru_delete", 00:07:42.271 "bdev_passthru_create", 00:07:42.271 "bdev_lvol_set_parent_bdev", 00:07:42.271 "bdev_lvol_set_parent", 00:07:42.271 "bdev_lvol_check_shallow_copy", 00:07:42.271 "bdev_lvol_start_shallow_copy", 00:07:42.272 "bdev_lvol_grow_lvstore", 00:07:42.272 "bdev_lvol_get_lvols", 00:07:42.272 "bdev_lvol_get_lvstores", 00:07:42.272 "bdev_lvol_delete", 00:07:42.272 "bdev_lvol_set_read_only", 00:07:42.272 
"bdev_lvol_resize", 00:07:42.272 "bdev_lvol_decouple_parent", 00:07:42.272 "bdev_lvol_inflate", 00:07:42.272 "bdev_lvol_rename", 00:07:42.272 "bdev_lvol_clone_bdev", 00:07:42.272 "bdev_lvol_clone", 00:07:42.272 "bdev_lvol_snapshot", 00:07:42.272 "bdev_lvol_create", 00:07:42.272 "bdev_lvol_delete_lvstore", 00:07:42.272 "bdev_lvol_rename_lvstore", 00:07:42.272 "bdev_lvol_create_lvstore", 00:07:42.272 "bdev_raid_set_options", 00:07:42.272 "bdev_raid_remove_base_bdev", 00:07:42.272 "bdev_raid_add_base_bdev", 00:07:42.272 "bdev_raid_delete", 00:07:42.272 "bdev_raid_create", 00:07:42.272 "bdev_raid_get_bdevs", 00:07:42.272 "bdev_error_inject_error", 00:07:42.272 "bdev_error_delete", 00:07:42.272 "bdev_error_create", 00:07:42.272 "bdev_split_delete", 00:07:42.272 "bdev_split_create", 00:07:42.272 "bdev_delay_delete", 00:07:42.272 "bdev_delay_create", 00:07:42.272 "bdev_delay_update_latency", 00:07:42.272 "bdev_zone_block_delete", 00:07:42.272 "bdev_zone_block_create", 00:07:42.272 "blobfs_create", 00:07:42.272 "blobfs_detect", 00:07:42.272 "blobfs_set_cache_size", 00:07:42.272 "bdev_crypto_delete", 00:07:42.272 "bdev_crypto_create", 00:07:42.272 "bdev_compress_delete", 00:07:42.272 "bdev_compress_create", 00:07:42.272 "bdev_compress_get_orphans", 00:07:42.272 "bdev_aio_delete", 00:07:42.272 "bdev_aio_rescan", 00:07:42.272 "bdev_aio_create", 00:07:42.272 "bdev_ftl_set_property", 00:07:42.272 "bdev_ftl_get_properties", 00:07:42.272 "bdev_ftl_get_stats", 00:07:42.272 "bdev_ftl_unmap", 00:07:42.272 "bdev_ftl_unload", 00:07:42.272 "bdev_ftl_delete", 00:07:42.272 "bdev_ftl_load", 00:07:42.272 "bdev_ftl_create", 00:07:42.272 "bdev_virtio_attach_controller", 00:07:42.272 "bdev_virtio_scsi_get_devices", 00:07:42.272 "bdev_virtio_detach_controller", 00:07:42.272 "bdev_virtio_blk_set_hotplug", 00:07:42.272 "bdev_iscsi_delete", 00:07:42.272 "bdev_iscsi_create", 00:07:42.272 "bdev_iscsi_set_options", 00:07:42.272 "accel_error_inject_error", 00:07:42.272 "ioat_scan_accel_module", 
00:07:42.272 "dsa_scan_accel_module", 00:07:42.272 "iaa_scan_accel_module", 00:07:42.272 "dpdk_cryptodev_get_driver", 00:07:42.272 "dpdk_cryptodev_set_driver", 00:07:42.272 "dpdk_cryptodev_scan_accel_module", 00:07:42.272 "compressdev_scan_accel_module", 00:07:42.272 "keyring_file_remove_key", 00:07:42.272 "keyring_file_add_key", 00:07:42.272 "keyring_linux_set_options", 00:07:42.272 "iscsi_get_histogram", 00:07:42.272 "iscsi_enable_histogram", 00:07:42.272 "iscsi_set_options", 00:07:42.272 "iscsi_get_auth_groups", 00:07:42.272 "iscsi_auth_group_remove_secret", 00:07:42.272 "iscsi_auth_group_add_secret", 00:07:42.272 "iscsi_delete_auth_group", 00:07:42.272 "iscsi_create_auth_group", 00:07:42.272 "iscsi_set_discovery_auth", 00:07:42.272 "iscsi_get_options", 00:07:42.272 "iscsi_target_node_request_logout", 00:07:42.272 "iscsi_target_node_set_redirect", 00:07:42.272 "iscsi_target_node_set_auth", 00:07:42.272 "iscsi_target_node_add_lun", 00:07:42.272 "iscsi_get_stats", 00:07:42.272 "iscsi_get_connections", 00:07:42.272 "iscsi_portal_group_set_auth", 00:07:42.272 "iscsi_start_portal_group", 00:07:42.272 "iscsi_delete_portal_group", 00:07:42.272 "iscsi_create_portal_group", 00:07:42.272 "iscsi_get_portal_groups", 00:07:42.272 "iscsi_delete_target_node", 00:07:42.272 "iscsi_target_node_remove_pg_ig_maps", 00:07:42.272 "iscsi_target_node_add_pg_ig_maps", 00:07:42.272 "iscsi_create_target_node", 00:07:42.272 "iscsi_get_target_nodes", 00:07:42.272 "iscsi_delete_initiator_group", 00:07:42.272 "iscsi_initiator_group_remove_initiators", 00:07:42.272 "iscsi_initiator_group_add_initiators", 00:07:42.272 "iscsi_create_initiator_group", 00:07:42.272 "iscsi_get_initiator_groups", 00:07:42.272 "nvmf_set_crdt", 00:07:42.272 "nvmf_set_config", 00:07:42.272 "nvmf_set_max_subsystems", 00:07:42.272 "nvmf_stop_mdns_prr", 00:07:42.272 "nvmf_publish_mdns_prr", 00:07:42.272 "nvmf_subsystem_get_listeners", 00:07:42.272 "nvmf_subsystem_get_qpairs", 00:07:42.272 "nvmf_subsystem_get_controllers", 
00:07:42.272 "nvmf_get_stats", 00:07:42.272 "nvmf_get_transports", 00:07:42.272 "nvmf_create_transport", 00:07:42.272 "nvmf_get_targets", 00:07:42.272 "nvmf_delete_target", 00:07:42.272 "nvmf_create_target", 00:07:42.272 "nvmf_subsystem_allow_any_host", 00:07:42.272 "nvmf_subsystem_remove_host", 00:07:42.272 "nvmf_subsystem_add_host", 00:07:42.272 "nvmf_ns_remove_host", 00:07:42.272 "nvmf_ns_add_host", 00:07:42.272 "nvmf_subsystem_remove_ns", 00:07:42.272 "nvmf_subsystem_add_ns", 00:07:42.272 "nvmf_subsystem_listener_set_ana_state", 00:07:42.272 "nvmf_discovery_get_referrals", 00:07:42.272 "nvmf_discovery_remove_referral", 00:07:42.272 "nvmf_discovery_add_referral", 00:07:42.272 "nvmf_subsystem_remove_listener", 00:07:42.272 "nvmf_subsystem_add_listener", 00:07:42.272 "nvmf_delete_subsystem", 00:07:42.272 "nvmf_create_subsystem", 00:07:42.272 "nvmf_get_subsystems", 00:07:42.272 "env_dpdk_get_mem_stats", 00:07:42.272 "nbd_get_disks", 00:07:42.272 "nbd_stop_disk", 00:07:42.272 "nbd_start_disk", 00:07:42.272 "ublk_recover_disk", 00:07:42.272 "ublk_get_disks", 00:07:42.272 "ublk_stop_disk", 00:07:42.272 "ublk_start_disk", 00:07:42.272 "ublk_destroy_target", 00:07:42.272 "ublk_create_target", 00:07:42.272 "virtio_blk_create_transport", 00:07:42.272 "virtio_blk_get_transports", 00:07:42.272 "vhost_controller_set_coalescing", 00:07:42.272 "vhost_get_controllers", 00:07:42.272 "vhost_delete_controller", 00:07:42.272 "vhost_create_blk_controller", 00:07:42.272 "vhost_scsi_controller_remove_target", 00:07:42.272 "vhost_scsi_controller_add_target", 00:07:42.272 "vhost_start_scsi_controller", 00:07:42.272 "vhost_create_scsi_controller", 00:07:42.272 "thread_set_cpumask", 00:07:42.272 "framework_get_governor", 00:07:42.272 "framework_get_scheduler", 00:07:42.272 "framework_set_scheduler", 00:07:42.272 "framework_get_reactors", 00:07:42.272 "thread_get_io_channels", 00:07:42.272 "thread_get_pollers", 00:07:42.272 "thread_get_stats", 00:07:42.272 
"framework_monitor_context_switch", 00:07:42.272 "spdk_kill_instance", 00:07:42.272 "log_enable_timestamps", 00:07:42.272 "log_get_flags", 00:07:42.272 "log_clear_flag", 00:07:42.272 "log_set_flag", 00:07:42.272 "log_get_level", 00:07:42.272 "log_set_level", 00:07:42.272 "log_get_print_level", 00:07:42.272 "log_set_print_level", 00:07:42.272 "framework_enable_cpumask_locks", 00:07:42.272 "framework_disable_cpumask_locks", 00:07:42.272 "framework_wait_init", 00:07:42.272 "framework_start_init", 00:07:42.272 "scsi_get_devices", 00:07:42.272 "bdev_get_histogram", 00:07:42.272 "bdev_enable_histogram", 00:07:42.272 "bdev_set_qos_limit", 00:07:42.272 "bdev_set_qd_sampling_period", 00:07:42.272 "bdev_get_bdevs", 00:07:42.272 "bdev_reset_iostat", 00:07:42.272 "bdev_get_iostat", 00:07:42.272 "bdev_examine", 00:07:42.272 "bdev_wait_for_examine", 00:07:42.272 "bdev_set_options", 00:07:42.272 "notify_get_notifications", 00:07:42.272 "notify_get_types", 00:07:42.272 "accel_get_stats", 00:07:42.272 "accel_set_options", 00:07:42.272 "accel_set_driver", 00:07:42.272 "accel_crypto_key_destroy", 00:07:42.272 "accel_crypto_keys_get", 00:07:42.272 "accel_crypto_key_create", 00:07:42.272 "accel_assign_opc", 00:07:42.272 "accel_get_module_info", 00:07:42.272 "accel_get_opc_assignments", 00:07:42.272 "vmd_rescan", 00:07:42.272 "vmd_remove_device", 00:07:42.272 "vmd_enable", 00:07:42.272 "sock_get_default_impl", 00:07:42.272 "sock_set_default_impl", 00:07:42.272 "sock_impl_set_options", 00:07:42.272 "sock_impl_get_options", 00:07:42.272 "iobuf_get_stats", 00:07:42.272 "iobuf_set_options", 00:07:42.272 "framework_get_pci_devices", 00:07:42.272 "framework_get_config", 00:07:42.272 "framework_get_subsystems", 00:07:42.272 "trace_get_info", 00:07:42.272 "trace_get_tpoint_group_mask", 00:07:42.272 "trace_disable_tpoint_group", 00:07:42.272 "trace_enable_tpoint_group", 00:07:42.272 "trace_clear_tpoint_mask", 00:07:42.272 "trace_set_tpoint_mask", 00:07:42.272 "keyring_get_keys", 00:07:42.272 
"spdk_get_version", 00:07:42.272 "rpc_get_methods" 00:07:42.272 ] 00:07:42.272 13:07:22 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:42.272 13:07:22 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:42.272 13:07:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:42.272 13:07:22 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:42.272 13:07:22 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 611160 00:07:42.272 13:07:22 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 611160 ']' 00:07:42.272 13:07:22 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 611160 00:07:42.272 13:07:22 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:07:42.272 13:07:22 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:42.272 13:07:22 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 611160 00:07:42.531 13:07:22 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:42.531 13:07:22 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:42.531 13:07:22 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 611160' 00:07:42.531 killing process with pid 611160 00:07:42.531 13:07:22 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 611160 00:07:42.531 13:07:22 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 611160 00:07:42.791 00:07:42.791 real 0m1.724s 00:07:42.791 user 0m3.122s 00:07:42.791 sys 0m0.592s 00:07:42.791 13:07:23 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.791 13:07:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:42.791 ************************************ 00:07:42.791 END TEST spdkcli_tcp 00:07:42.791 ************************************ 00:07:42.791 13:07:23 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:42.791 13:07:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:42.791 13:07:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.791 13:07:23 -- common/autotest_common.sh@10 -- # set +x 00:07:42.791 ************************************ 00:07:42.791 START TEST dpdk_mem_utility 00:07:42.791 ************************************ 00:07:42.791 13:07:23 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:43.049 * Looking for test storage... 00:07:43.049 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility 00:07:43.049 13:07:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:43.049 13:07:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=611499 00:07:43.049 13:07:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 611499 00:07:43.049 13:07:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:43.049 13:07:23 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 611499 ']' 00:07:43.049 13:07:23 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.049 13:07:23 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:43.049 13:07:23 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:43.049 13:07:23 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:43.049 13:07:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:43.049 [2024-07-26 13:07:23.400198] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:07:43.049 [2024-07-26 13:07:23.400262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid611499 ] 00:07:43.049 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.049 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:43.049 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3d:02.1 cannot be 
used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3d:02.3 cannot be used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:43.050 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:43.050 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:43.050 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:43.050 [2024-07-26 13:07:23.533455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.370 [2024-07-26 13:07:23.619251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.936 13:07:24 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:43.936 13:07:24 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:07:43.936 13:07:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:43.936 13:07:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:43.936 13:07:24 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.936 13:07:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:43.936 { 00:07:43.936 "filename": 
"/tmp/spdk_mem_dump.txt" 00:07:43.936 } 00:07:43.936 13:07:24 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.936 13:07:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:43.936 DPDK memory size 814.000000 MiB in 1 heap(s) 00:07:43.936 1 heaps totaling size 814.000000 MiB 00:07:43.936 size: 814.000000 MiB heap id: 0 00:07:43.936 end heaps---------- 00:07:43.937 8 mempools totaling size 598.116089 MiB 00:07:43.937 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:43.937 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:43.937 size: 84.521057 MiB name: bdev_io_611499 00:07:43.937 size: 51.011292 MiB name: evtpool_611499 00:07:43.937 size: 50.003479 MiB name: msgpool_611499 00:07:43.937 size: 21.763794 MiB name: PDU_Pool 00:07:43.937 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:43.937 size: 0.026123 MiB name: Session_Pool 00:07:43.937 end mempools------- 00:07:43.937 201 memzones totaling size 4.176453 MiB 00:07:43.937 size: 1.000366 MiB name: RG_ring_0_611499 00:07:43.937 size: 1.000366 MiB name: RG_ring_1_611499 00:07:43.937 size: 1.000366 MiB name: RG_ring_4_611499 00:07:43.937 size: 1.000366 MiB name: RG_ring_5_611499 00:07:43.937 size: 0.125366 MiB name: RG_ring_2_611499 00:07:43.937 size: 0.015991 MiB name: RG_ring_3_611499 00:07:43.937 size: 0.001160 MiB name: QAT_SYM_CAPA_GEN_1 00:07:43.937 size: 0.000305 MiB name: 0000:1a:01.0_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1a:01.1_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1a:01.2_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1a:01.3_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1a:01.4_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1a:01.5_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1a:01.6_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1a:01.7_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1a:02.0_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1a:02.1_qat 
00:07:43.937 size: 0.000305 MiB name: 0000:1a:02.2_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1a:02.3_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1a:02.4_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1a:02.5_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1a:02.6_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1a:02.7_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1c:01.0_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1c:01.1_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1c:01.2_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1c:01.3_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1c:01.4_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1c:01.5_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1c:01.6_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1c:01.7_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1c:02.0_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1c:02.1_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1c:02.2_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1c:02.3_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1c:02.4_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1c:02.5_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1c:02.6_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1c:02.7_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1e:01.0_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1e:01.1_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1e:01.2_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1e:01.3_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1e:01.4_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1e:01.5_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1e:01.6_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1e:01.7_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1e:02.0_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1e:02.1_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1e:02.2_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1e:02.3_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1e:02.4_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1e:02.5_qat 00:07:43.937 size: 
0.000305 MiB name: 0000:1e:02.6_qat 00:07:43.937 size: 0.000305 MiB name: 0000:1e:02.7_qat 00:07:43.937 size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_0 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_1 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_0 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_2 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_3 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_1 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_4 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_5 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_2 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_6 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_7 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_3 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_8 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_9 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_4 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_10 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_11 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_5 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_12 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_13 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_6 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_14 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_15 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_7 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_16 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_17 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_8 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_18 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_19 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_9 00:07:43.937 size: 0.000122 MiB name: 
rte_cryptodev_data_20 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_21 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_10 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_22 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_23 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_11 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_24 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_25 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_12 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_26 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_27 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_13 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_28 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_29 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_14 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_30 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_31 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_15 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_32 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_33 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_16 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_34 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_35 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_17 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_36 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_37 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_18 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_38 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_39 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_19 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_40 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_41 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_20 00:07:43.937 size: 0.000122 MiB 
name: rte_cryptodev_data_42 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_43 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_21 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_44 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_45 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_22 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_46 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_47 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_23 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_48 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_49 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_24 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_50 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_51 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_25 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_52 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_53 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_26 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_54 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_55 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_27 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_56 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_57 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_28 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_58 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_59 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_29 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_60 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_61 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_30 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_62 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_63 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_31 00:07:43.937 size: 0.000122 
MiB name: rte_cryptodev_data_64 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_65 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_32 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_66 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_67 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_33 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_68 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_69 00:07:43.937 size: 0.000122 MiB name: rte_compressdev_data_34 00:07:43.937 size: 0.000122 MiB name: rte_cryptodev_data_70 00:07:43.938 size: 0.000122 MiB name: rte_cryptodev_data_71 00:07:43.938 size: 0.000122 MiB name: rte_compressdev_data_35 00:07:43.938 size: 0.000122 MiB name: rte_cryptodev_data_72 00:07:43.938 size: 0.000122 MiB name: rte_cryptodev_data_73 00:07:43.938 size: 0.000122 MiB name: rte_compressdev_data_36 00:07:43.938 size: 0.000122 MiB name: rte_cryptodev_data_74 00:07:43.938 size: 0.000122 MiB name: rte_cryptodev_data_75 00:07:43.938 size: 0.000122 MiB name: rte_compressdev_data_37 00:07:43.938 size: 0.000122 MiB name: rte_cryptodev_data_76 00:07:43.938 size: 0.000122 MiB name: rte_cryptodev_data_77 00:07:43.938 size: 0.000122 MiB name: rte_compressdev_data_38 00:07:43.938 size: 0.000122 MiB name: rte_cryptodev_data_78 00:07:43.938 size: 0.000122 MiB name: rte_cryptodev_data_79 00:07:43.938 size: 0.000122 MiB name: rte_compressdev_data_39 00:07:43.938 size: 0.000122 MiB name: rte_cryptodev_data_80 00:07:43.938 size: 0.000122 MiB name: rte_cryptodev_data_81 00:07:43.938 size: 0.000122 MiB name: rte_compressdev_data_40 00:07:43.938 size: 0.000122 MiB name: rte_cryptodev_data_82 00:07:43.938 size: 0.000122 MiB name: rte_cryptodev_data_83 00:07:43.938 size: 0.000122 MiB name: rte_compressdev_data_41 00:07:43.938 size: 0.000122 MiB name: rte_cryptodev_data_84 00:07:43.938 size: 0.000122 MiB name: rte_cryptodev_data_85 00:07:43.938 size: 0.000122 MiB name: rte_compressdev_data_42 00:07:43.938 size: 
0.000122 MiB name: rte_cryptodev_data_86
00:07:43.938 size: 0.000122 MiB name: rte_cryptodev_data_87
00:07:43.938 size: 0.000122 MiB name: rte_compressdev_data_43
00:07:43.938 size: 0.000122 MiB name: rte_cryptodev_data_88
00:07:43.938 size: 0.000122 MiB name: rte_cryptodev_data_89
00:07:43.938 size: 0.000122 MiB name: rte_compressdev_data_44
00:07:43.938 size: 0.000122 MiB name: rte_cryptodev_data_90
00:07:43.938 size: 0.000122 MiB name: rte_cryptodev_data_91
00:07:43.938 size: 0.000122 MiB name: rte_compressdev_data_45
00:07:43.938 size: 0.000122 MiB name: rte_cryptodev_data_92
00:07:43.938 size: 0.000122 MiB name: rte_cryptodev_data_93
00:07:43.938 size: 0.000122 MiB name: rte_compressdev_data_46
00:07:43.938 size: 0.000122 MiB name: rte_cryptodev_data_94
00:07:43.938 size: 0.000122 MiB name: rte_cryptodev_data_95
00:07:43.938 size: 0.000122 MiB name: rte_compressdev_data_47
00:07:43.938 size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1
00:07:43.938 end memzones-------
00:07:43.938 13:07:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:07:44.200 heap id: 0 total size: 814.000000 MiB number of busy elements: 635 number of free elements: 14
00:07:44.200 list of free elements. size: 11.781738 MiB
00:07:44.200 element at address: 0x200000400000 with size: 1.999512 MiB
00:07:44.200 element at address: 0x200018e00000 with size: 0.999878 MiB
00:07:44.200 element at address: 0x200019000000 with size: 0.999878 MiB
00:07:44.200 element at address: 0x200003e00000 with size: 0.996460 MiB
00:07:44.200 element at address: 0x200031c00000 with size: 0.994446 MiB
00:07:44.200 element at address: 0x200013800000 with size: 0.978699 MiB
00:07:44.200 element at address: 0x200007000000 with size: 0.959839 MiB
00:07:44.200 element at address: 0x200019200000 with size: 0.936584 MiB
00:07:44.200 element at address: 0x20001aa00000 with size: 0.564758 MiB
00:07:44.200 element at address: 0x200003a00000 with size: 0.494507 MiB
00:07:44.200 element at address: 0x20000b200000 with size: 0.489075 MiB
00:07:44.200 element at address: 0x200000800000 with size: 0.486694 MiB
00:07:44.200 element at address: 0x200019400000 with size: 0.485657 MiB
00:07:44.200 element at address: 0x200027e00000 with size: 0.395752 MiB
00:07:44.200 list of standard malloc elements.
size: 199.898254 MiB 00:07:44.200 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:07:44.200 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:07:44.200 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:44.200 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:07:44.200 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:44.200 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:44.200 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:07:44.200 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:44.200 element at address: 0x20000032bc80 with size: 0.004395 MiB 00:07:44.200 element at address: 0x20000032f740 with size: 0.004395 MiB 00:07:44.200 element at address: 0x200000333200 with size: 0.004395 MiB 00:07:44.200 element at address: 0x200000336cc0 with size: 0.004395 MiB 00:07:44.200 element at address: 0x20000033a780 with size: 0.004395 MiB 00:07:44.200 element at address: 0x20000033e240 with size: 0.004395 MiB 00:07:44.200 element at address: 0x200000341d00 with size: 0.004395 MiB 00:07:44.200 element at address: 0x2000003457c0 with size: 0.004395 MiB 00:07:44.200 element at address: 0x200000349280 with size: 0.004395 MiB 00:07:44.200 element at address: 0x20000034cd40 with size: 0.004395 MiB 00:07:44.200 element at address: 0x200000350800 with size: 0.004395 MiB 00:07:44.200 element at address: 0x2000003542c0 with size: 0.004395 MiB 00:07:44.200 element at address: 0x200000357d80 with size: 0.004395 MiB 00:07:44.200 element at address: 0x20000035b840 with size: 0.004395 MiB 00:07:44.200 element at address: 0x20000035f300 with size: 0.004395 MiB 00:07:44.200 element at address: 0x200000362dc0 with size: 0.004395 MiB 00:07:44.200 element at address: 0x200000366880 with size: 0.004395 MiB 00:07:44.200 element at address: 0x20000036a340 with size: 0.004395 MiB 00:07:44.200 element at address: 0x20000036de00 with size: 0.004395 MiB 00:07:44.200 element at 
address: 0x2000003718c0 with size: 0.004395 MiB 00:07:44.200 element at address: 0x200000375380 with size: 0.004395 MiB 00:07:44.200 element at address: 0x200000378e40 with size: 0.004395 MiB 00:07:44.200 element at address: 0x20000037c900 with size: 0.004395 MiB 00:07:44.200 element at address: 0x2000003803c0 with size: 0.004395 MiB 00:07:44.200 element at address: 0x200000383e80 with size: 0.004395 MiB 00:07:44.200 element at address: 0x200000387940 with size: 0.004395 MiB 00:07:44.200 element at address: 0x20000038b400 with size: 0.004395 MiB 00:07:44.200 element at address: 0x20000038eec0 with size: 0.004395 MiB 00:07:44.200 element at address: 0x200000392980 with size: 0.004395 MiB 00:07:44.200 element at address: 0x200000396440 with size: 0.004395 MiB 00:07:44.200 element at address: 0x200000399f00 with size: 0.004395 MiB 00:07:44.200 element at address: 0x20000039d9c0 with size: 0.004395 MiB 00:07:44.200 element at address: 0x2000003a1480 with size: 0.004395 MiB 00:07:44.200 element at address: 0x2000003a4f40 with size: 0.004395 MiB 00:07:44.200 element at address: 0x2000003a8a00 with size: 0.004395 MiB 00:07:44.200 element at address: 0x2000003ac4c0 with size: 0.004395 MiB 00:07:44.200 element at address: 0x2000003aff80 with size: 0.004395 MiB 00:07:44.200 element at address: 0x2000003b3a40 with size: 0.004395 MiB 00:07:44.200 element at address: 0x2000003b7500 with size: 0.004395 MiB 00:07:44.200 element at address: 0x2000003bafc0 with size: 0.004395 MiB 00:07:44.200 element at address: 0x2000003bea80 with size: 0.004395 MiB 00:07:44.200 element at address: 0x2000003c2540 with size: 0.004395 MiB 00:07:44.200 element at address: 0x2000003c6000 with size: 0.004395 MiB 00:07:44.200 element at address: 0x2000003c9ac0 with size: 0.004395 MiB 00:07:44.200 element at address: 0x2000003cd580 with size: 0.004395 MiB 00:07:44.200 element at address: 0x2000003d1040 with size: 0.004395 MiB 00:07:44.200 element at address: 0x2000003d4b00 with size: 0.004395 MiB 
00:07:44.200 element at address: 0x2000003d8d00 with size: 0.004395 MiB 00:07:44.200 element at address: 0x200000329b80 with size: 0.004028 MiB 00:07:44.200 element at address: 0x20000032ac00 with size: 0.004028 MiB 00:07:44.200 element at address: 0x20000032d640 with size: 0.004028 MiB 00:07:44.200 element at address: 0x20000032e6c0 with size: 0.004028 MiB 00:07:44.200 element at address: 0x200000331100 with size: 0.004028 MiB 00:07:44.200 element at address: 0x200000332180 with size: 0.004028 MiB 00:07:44.200 element at address: 0x200000334bc0 with size: 0.004028 MiB 00:07:44.200 element at address: 0x200000335c40 with size: 0.004028 MiB 00:07:44.200 element at address: 0x200000338680 with size: 0.004028 MiB 00:07:44.200 element at address: 0x200000339700 with size: 0.004028 MiB 00:07:44.200 element at address: 0x20000033c140 with size: 0.004028 MiB 00:07:44.200 element at address: 0x20000033d1c0 with size: 0.004028 MiB 00:07:44.200 element at address: 0x20000033fc00 with size: 0.004028 MiB 00:07:44.200 element at address: 0x200000340c80 with size: 0.004028 MiB 00:07:44.200 element at address: 0x2000003436c0 with size: 0.004028 MiB 00:07:44.200 element at address: 0x200000344740 with size: 0.004028 MiB 00:07:44.200 element at address: 0x200000347180 with size: 0.004028 MiB 00:07:44.200 element at address: 0x200000348200 with size: 0.004028 MiB 00:07:44.200 element at address: 0x20000034ac40 with size: 0.004028 MiB 00:07:44.200 element at address: 0x20000034bcc0 with size: 0.004028 MiB 00:07:44.200 element at address: 0x20000034e700 with size: 0.004028 MiB 00:07:44.200 element at address: 0x20000034f780 with size: 0.004028 MiB 00:07:44.200 element at address: 0x2000003521c0 with size: 0.004028 MiB 00:07:44.200 element at address: 0x200000353240 with size: 0.004028 MiB 00:07:44.200 element at address: 0x200000355c80 with size: 0.004028 MiB 00:07:44.200 element at address: 0x200000356d00 with size: 0.004028 MiB 00:07:44.200 element at address: 0x200000359740 with 
size: 0.004028 MiB 00:07:44.200 element at address: 0x20000035a7c0 with size: 0.004028 MiB 00:07:44.200 element at address: 0x20000035d200 with size: 0.004028 MiB 00:07:44.200 element at address: 0x20000035e280 with size: 0.004028 MiB 00:07:44.200 element at address: 0x200000360cc0 with size: 0.004028 MiB 00:07:44.200 element at address: 0x200000361d40 with size: 0.004028 MiB 00:07:44.200 element at address: 0x200000364780 with size: 0.004028 MiB 00:07:44.200 element at address: 0x200000365800 with size: 0.004028 MiB 00:07:44.200 element at address: 0x200000368240 with size: 0.004028 MiB 00:07:44.200 element at address: 0x2000003692c0 with size: 0.004028 MiB 00:07:44.200 element at address: 0x20000036bd00 with size: 0.004028 MiB 00:07:44.200 element at address: 0x20000036cd80 with size: 0.004028 MiB 00:07:44.200 element at address: 0x20000036f7c0 with size: 0.004028 MiB 00:07:44.200 element at address: 0x200000370840 with size: 0.004028 MiB 00:07:44.200 element at address: 0x200000373280 with size: 0.004028 MiB 00:07:44.200 element at address: 0x200000374300 with size: 0.004028 MiB 00:07:44.200 element at address: 0x200000376d40 with size: 0.004028 MiB 00:07:44.200 element at address: 0x200000377dc0 with size: 0.004028 MiB 00:07:44.200 element at address: 0x20000037a800 with size: 0.004028 MiB 00:07:44.200 element at address: 0x20000037b880 with size: 0.004028 MiB 00:07:44.201 element at address: 0x20000037e2c0 with size: 0.004028 MiB 00:07:44.201 element at address: 0x20000037f340 with size: 0.004028 MiB 00:07:44.201 element at address: 0x200000381d80 with size: 0.004028 MiB 00:07:44.201 element at address: 0x200000382e00 with size: 0.004028 MiB 00:07:44.201 element at address: 0x200000385840 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003868c0 with size: 0.004028 MiB 00:07:44.201 element at address: 0x200000389300 with size: 0.004028 MiB 00:07:44.201 element at address: 0x20000038a380 with size: 0.004028 MiB 00:07:44.201 element at address: 
0x20000038cdc0 with size: 0.004028 MiB 00:07:44.201 element at address: 0x20000038de40 with size: 0.004028 MiB 00:07:44.201 element at address: 0x200000390880 with size: 0.004028 MiB 00:07:44.201 element at address: 0x200000391900 with size: 0.004028 MiB 00:07:44.201 element at address: 0x200000394340 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003953c0 with size: 0.004028 MiB 00:07:44.201 element at address: 0x200000397e00 with size: 0.004028 MiB 00:07:44.201 element at address: 0x200000398e80 with size: 0.004028 MiB 00:07:44.201 element at address: 0x20000039b8c0 with size: 0.004028 MiB 00:07:44.201 element at address: 0x20000039c940 with size: 0.004028 MiB 00:07:44.201 element at address: 0x20000039f380 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003a0400 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003a2e40 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003a3ec0 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003a6900 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003a7980 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003aa3c0 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003ab440 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003ade80 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003aef00 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003b1940 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003b29c0 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003b5400 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003b6480 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003b8ec0 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003b9f40 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003bc980 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003bda00 with size: 0.004028 MiB 00:07:44.201 
element at address: 0x2000003c0440 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003c14c0 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003c3f00 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003c4f80 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003c79c0 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003c8a40 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003cb480 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003cc500 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003cef40 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003cffc0 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003d2a00 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003d3a80 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003d6c00 with size: 0.004028 MiB 00:07:44.201 element at address: 0x2000003d7c80 with size: 0.004028 MiB 00:07:44.201 element at address: 0x200000200000 with size: 0.000305 MiB 00:07:44.201 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:07:44.201 element at address: 0x200000200140 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000200200 with size: 0.000183 MiB 00:07:44.201 element at address: 0x2000002002c0 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000200380 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000200440 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000200500 with size: 0.000183 MiB 00:07:44.201 element at address: 0x2000002005c0 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000200680 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000200740 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000200800 with size: 0.000183 MiB 00:07:44.201 element at address: 0x2000002008c0 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000200980 with size: 0.000183 
MiB 00:07:44.201 element at address: 0x200000200a40 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000200b00 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000200bc0 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000200c80 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000200d40 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000200e00 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000200ec0 with size: 0.000183 MiB 00:07:44.201 element at address: 0x2000002010c0 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000205380 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000225640 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000225700 with size: 0.000183 MiB 00:07:44.201 element at address: 0x2000002257c0 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000225880 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000225940 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000225a00 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000225ac0 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000225b80 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000225c40 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000225d00 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000225dc0 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000225e80 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000225f40 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000226000 with size: 0.000183 MiB 00:07:44.201 element at address: 0x2000002260c0 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000226180 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000226240 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000226300 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000226500 
with size: 0.000183 MiB 00:07:44.201 element at address: 0x2000002265c0 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000226680 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000226740 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000226800 with size: 0.000183 MiB 00:07:44.201 element at address: 0x2000002268c0 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000226980 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000226a40 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000226b00 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000226bc0 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000226c80 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000226d40 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000226e00 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000226ec0 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000226f80 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000227040 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000227100 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000329300 with size: 0.000183 MiB 00:07:44.201 element at address: 0x2000003293c0 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000329580 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000329640 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000329800 with size: 0.000183 MiB 00:07:44.201 element at address: 0x20000032ce80 with size: 0.000183 MiB 00:07:44.201 element at address: 0x20000032d040 with size: 0.000183 MiB 00:07:44.201 element at address: 0x20000032d100 with size: 0.000183 MiB 00:07:44.201 element at address: 0x20000032d2c0 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000330940 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000330b00 with size: 0.000183 MiB 00:07:44.201 element at 
address: 0x200000330bc0 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000330d80 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000334400 with size: 0.000183 MiB 00:07:44.201 element at address: 0x2000003345c0 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000334680 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000334840 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000337ec0 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000338080 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000338140 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000338300 with size: 0.000183 MiB 00:07:44.201 element at address: 0x20000033b980 with size: 0.000183 MiB 00:07:44.201 element at address: 0x20000033bb40 with size: 0.000183 MiB 00:07:44.201 element at address: 0x20000033bc00 with size: 0.000183 MiB 00:07:44.201 element at address: 0x20000033bdc0 with size: 0.000183 MiB 00:07:44.201 element at address: 0x20000033f440 with size: 0.000183 MiB 00:07:44.201 element at address: 0x20000033f600 with size: 0.000183 MiB 00:07:44.201 element at address: 0x20000033f6c0 with size: 0.000183 MiB 00:07:44.201 element at address: 0x20000033f880 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000342f00 with size: 0.000183 MiB 00:07:44.201 element at address: 0x2000003430c0 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000343180 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000343340 with size: 0.000183 MiB 00:07:44.201 element at address: 0x2000003469c0 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000346b80 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000346c40 with size: 0.000183 MiB 00:07:44.201 element at address: 0x200000346e00 with size: 0.000183 MiB 00:07:44.201 element at address: 0x20000034a480 with size: 0.000183 MiB 00:07:44.201 element at address: 0x20000034a640 with size: 0.000183 MiB 
00:07:44.202 element at address: 0x20000034a700 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000034a8c0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000034df40 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000034e100 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000034e1c0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000034e380 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000351a00 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000351bc0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000351c80 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000351e40 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003554c0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000355680 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000355740 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000355900 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000358f80 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000359140 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000359200 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003593c0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000035ca40 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000035cc00 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000035ccc0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000035ce80 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000360500 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003606c0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000360780 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000360940 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000363fc0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000364180 with 
size: 0.000183 MiB 00:07:44.202 element at address: 0x200000364240 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000364400 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000367a80 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000367c40 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000367d00 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000367ec0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000036b540 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000036b700 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000036b7c0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000036b980 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000036f000 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000036f1c0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000036f280 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000036f440 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000372ac0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000372c80 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000372d40 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000372f00 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000376580 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000376740 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000376800 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003769c0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000037a040 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000037a200 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000037a2c0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000037a480 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000037db00 with size: 0.000183 MiB 00:07:44.202 element at address: 
0x20000037dcc0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000037dd80 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000037df40 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003815c0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000381780 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000381840 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000381a00 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000385080 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000385240 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000385300 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003854c0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000388b40 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000388d00 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000388dc0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000388f80 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000038c600 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000038c7c0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000038c880 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000038ca40 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003900c0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000390280 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000390340 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000390500 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000393b80 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000393d40 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000393e00 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000393fc0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000397640 with size: 0.000183 MiB 00:07:44.202 
element at address: 0x200000397800 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003978c0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x200000397a80 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000039b100 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000039b2c0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000039b380 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000039b540 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000039ebc0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000039ed80 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000039ee40 with size: 0.000183 MiB 00:07:44.202 element at address: 0x20000039f000 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003a2680 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003a2840 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003a2900 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003a2ac0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003a6140 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003a6300 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003a63c0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003a6580 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003a9c00 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003a9dc0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003a9e80 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003aa040 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003ad6c0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003ad880 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003ad940 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003adb00 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003b1180 with size: 0.000183 
MiB 00:07:44.202 element at address: 0x2000003b1340 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003b1400 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003b15c0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003b4c40 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003b4e00 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003b4ec0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003b5080 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003b8700 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003b88c0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003b8980 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003b8b40 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003bc1c0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003bc380 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003bc440 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003bc600 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003bfc80 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003bfe40 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003bff00 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003c00c0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003c3740 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003c3900 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003c39c0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003c3b80 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003c7200 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003c73c0 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003c7480 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003c7640 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003cacc0 
with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003cae80 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003caf40 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003cb100 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003ce780 with size: 0.000183 MiB 00:07:44.202 element at address: 0x2000003ce940 with size: 0.000183 MiB 00:07:44.203 element at address: 0x2000003cea00 with size: 0.000183 MiB 00:07:44.203 element at address: 0x2000003cebc0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x2000003d2240 with size: 0.000183 MiB 00:07:44.203 element at address: 0x2000003d2400 with size: 0.000183 MiB 00:07:44.203 element at address: 0x2000003d24c0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x2000003d2680 with size: 0.000183 MiB 00:07:44.203 element at address: 0x2000003d5dc0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x2000003d64c0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x2000003d6580 with size: 0.000183 MiB 00:07:44.203 element at address: 0x2000003d6880 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20000087c980 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:07:44.203 element at address: 0x200003a7e980 with size: 0.000183 MiB 00:07:44.203 element at address: 0x200003a7ea40 with size: 0.000183 MiB 00:07:44.203 element at address: 0x200003a7eb00 with size: 0.000183 MiB 00:07:44.203 element at 
address: 0x200003a7ebc0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x200003a7ec80 with size: 0.000183 MiB 00:07:44.203 element at address: 0x200003a7ed40 with size: 0.000183 MiB 00:07:44.203 element at address: 0x200003a7ee00 with size: 0.000183 MiB 00:07:44.203 element at address: 0x200003a7eec0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x200003a7ef80 with size: 0.000183 MiB 00:07:44.203 element at address: 0x200003a7f040 with size: 0.000183 MiB 00:07:44.203 element at address: 0x200003a7f100 with size: 0.000183 MiB 00:07:44.203 element at address: 0x200003a7f1c0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x200003a7f280 with size: 0.000183 MiB 00:07:44.203 element at address: 0x200003a7f340 with size: 0.000183 MiB 00:07:44.203 element at address: 0x200003a7f400 with size: 0.000183 MiB 00:07:44.203 element at address: 0x200003a7f4c0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x200003a7f580 with size: 0.000183 MiB 00:07:44.203 element at address: 0x200003a7f640 with size: 0.000183 MiB 00:07:44.203 element at address: 0x200003a7f700 with size: 0.000183 MiB 00:07:44.203 element at address: 0x200003a7f7c0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x200003a7f880 with size: 0.000183 MiB 00:07:44.203 element at address: 0x200003a7f940 with size: 0.000183 MiB 00:07:44.203 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20000b27d880 with size: 0.000183 MiB 
00:07:44.203 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:07:44.203 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:07:44.203 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:07:44.203 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa90940 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa90a00 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa90ac0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa90b80 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa90c40 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa90d00 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa90dc0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa90e80 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa90f40 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa91000 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa910c0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa91180 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa91240 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa91300 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa913c0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa91480 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa91540 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa91600 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa916c0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa91780 with 
size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:07:44.203 element at address: 
0x20001aa92c80 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:07:44.203 
element at address: 0x20001aa94180 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:07:44.203 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:07:44.204 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:07:44.204 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:07:44.204 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:07:44.204 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:07:44.204 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:07:44.204 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:07:44.204 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:07:44.204 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:07:44.204 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:07:44.204 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:07:44.204 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:07:44.204 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:07:44.204 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e65500 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e655c0 with size: 0.000183 
MiB 00:07:44.204 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6d740 
with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:07:44.204 element at 
address: 0x200027e6ec40 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:07:44.204 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:07:44.204 list of memzone associated elements. 
size: 602.320007 MiB 00:07:44.204 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:07:44.204 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:44.204 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:07:44.204 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:44.204 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:07:44.204 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_611499_0 00:07:44.204 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:07:44.204 associated memzone info: size: 48.002930 MiB name: MP_evtpool_611499_0 00:07:44.204 element at address: 0x200003fff380 with size: 48.003052 MiB 00:07:44.204 associated memzone info: size: 48.002930 MiB name: MP_msgpool_611499_0 00:07:44.204 element at address: 0x2000195be940 with size: 20.255554 MiB 00:07:44.204 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:44.204 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:07:44.204 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:44.204 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:07:44.204 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_611499 00:07:44.204 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:07:44.204 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_611499 00:07:44.204 element at address: 0x2000002271c0 with size: 1.008118 MiB 00:07:44.204 associated memzone info: size: 1.007996 MiB name: MP_evtpool_611499 00:07:44.204 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:07:44.204 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:44.204 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:07:44.204 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:44.204 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:07:44.204 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:44.204 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:07:44.204 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:44.204 element at address: 0x200003eff180 with size: 1.000488 MiB 00:07:44.204 associated memzone info: size: 1.000366 MiB name: RG_ring_0_611499 00:07:44.204 element at address: 0x200003affc00 with size: 1.000488 MiB 00:07:44.204 associated memzone info: size: 1.000366 MiB name: RG_ring_1_611499 00:07:44.204 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:07:44.204 associated memzone info: size: 1.000366 MiB name: RG_ring_4_611499 00:07:44.204 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:07:44.204 associated memzone info: size: 1.000366 MiB name: RG_ring_5_611499 00:07:44.204 element at address: 0x200003a7fa00 with size: 0.500488 MiB 00:07:44.204 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_611499 00:07:44.204 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:07:44.204 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:44.204 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:07:44.204 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:44.204 element at address: 0x20001947c540 with size: 0.250488 MiB 00:07:44.204 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:44.204 element at address: 0x200000205440 with size: 0.125488 MiB 00:07:44.204 associated memzone info: size: 0.125366 MiB name: RG_ring_2_611499 00:07:44.204 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:07:44.205 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:44.205 element at address: 0x200027e65680 with size: 0.023743 MiB 00:07:44.205 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:44.205 element at address: 0x200000201180 with size: 0.016113 MiB 00:07:44.205 
associated memzone info: size: 0.015991 MiB name: RG_ring_3_611499 00:07:44.205 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:07:44.205 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:44.205 element at address: 0x2000003d5f80 with size: 0.001282 MiB 00:07:44.205 associated memzone info: size: 0.001160 MiB name: QAT_SYM_CAPA_GEN_1 00:07:44.205 element at address: 0x2000003d6a40 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.0_qat 00:07:44.205 element at address: 0x2000003d2840 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.1_qat 00:07:44.205 element at address: 0x2000003ced80 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.2_qat 00:07:44.205 element at address: 0x2000003cb2c0 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.3_qat 00:07:44.205 element at address: 0x2000003c7800 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.4_qat 00:07:44.205 element at address: 0x2000003c3d40 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.5_qat 00:07:44.205 element at address: 0x2000003c0280 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.6_qat 00:07:44.205 element at address: 0x2000003bc7c0 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1a:01.7_qat 00:07:44.205 element at address: 0x2000003b8d00 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.0_qat 00:07:44.205 element at address: 0x2000003b5240 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.1_qat 00:07:44.205 element at address: 0x2000003b1780 with size: 0.000427 MiB 00:07:44.205 associated memzone 
info: size: 0.000305 MiB name: 0000:1a:02.2_qat 00:07:44.205 element at address: 0x2000003adcc0 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.3_qat 00:07:44.205 element at address: 0x2000003aa200 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.4_qat 00:07:44.205 element at address: 0x2000003a6740 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.5_qat 00:07:44.205 element at address: 0x2000003a2c80 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.6_qat 00:07:44.205 element at address: 0x20000039f1c0 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1a:02.7_qat 00:07:44.205 element at address: 0x20000039b700 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.0_qat 00:07:44.205 element at address: 0x200000397c40 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.1_qat 00:07:44.205 element at address: 0x200000394180 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.2_qat 00:07:44.205 element at address: 0x2000003906c0 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.3_qat 00:07:44.205 element at address: 0x20000038cc00 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.4_qat 00:07:44.205 element at address: 0x200000389140 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.5_qat 00:07:44.205 element at address: 0x200000385680 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1c:01.6_qat 00:07:44.205 element at address: 0x200000381bc0 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 
MiB name: 0000:1c:01.7_qat 00:07:44.205 element at address: 0x20000037e100 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.0_qat 00:07:44.205 element at address: 0x20000037a640 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.1_qat 00:07:44.205 element at address: 0x200000376b80 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.2_qat 00:07:44.205 element at address: 0x2000003730c0 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.3_qat 00:07:44.205 element at address: 0x20000036f600 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.4_qat 00:07:44.205 element at address: 0x20000036bb40 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.5_qat 00:07:44.205 element at address: 0x200000368080 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.6_qat 00:07:44.205 element at address: 0x2000003645c0 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1c:02.7_qat 00:07:44.205 element at address: 0x200000360b00 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.0_qat 00:07:44.205 element at address: 0x20000035d040 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.1_qat 00:07:44.205 element at address: 0x200000359580 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.2_qat 00:07:44.205 element at address: 0x200000355ac0 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.3_qat 00:07:44.205 element at address: 0x200000352000 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 
0000:1e:01.4_qat 00:07:44.205 element at address: 0x20000034e540 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.5_qat 00:07:44.205 element at address: 0x20000034aa80 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.6_qat 00:07:44.205 element at address: 0x200000346fc0 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1e:01.7_qat 00:07:44.205 element at address: 0x200000343500 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.0_qat 00:07:44.205 element at address: 0x20000033fa40 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.1_qat 00:07:44.205 element at address: 0x20000033bf80 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.2_qat 00:07:44.205 element at address: 0x2000003384c0 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.3_qat 00:07:44.205 element at address: 0x200000334a00 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.4_qat 00:07:44.205 element at address: 0x200000330f40 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.5_qat 00:07:44.205 element at address: 0x20000032d480 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.6_qat 00:07:44.205 element at address: 0x2000003299c0 with size: 0.000427 MiB 00:07:44.205 associated memzone info: size: 0.000305 MiB name: 0000:1e:02.7_qat 00:07:44.205 element at address: 0x2000003d6740 with size: 0.000305 MiB 00:07:44.205 associated memzone info: size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1 00:07:44.205 element at address: 0x2000002263c0 with size: 0.000305 MiB 00:07:44.205 associated memzone info: size: 0.000183 MiB name: MP_msgpool_611499 
00:07:44.205 element at address: 0x200000200f80 with size: 0.000305 MiB 00:07:44.205 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_611499 00:07:44.205 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:07:44.205 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:44.205 element at address: 0x2000003d6940 with size: 0.000244 MiB 00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_0 00:07:44.206 element at address: 0x2000003d6640 with size: 0.000244 MiB 00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_1 00:07:44.206 element at address: 0x2000003d5e80 with size: 0.000244 MiB 00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_0 00:07:44.206 element at address: 0x2000003d2740 with size: 0.000244 MiB 00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_2 00:07:44.206 element at address: 0x2000003d2580 with size: 0.000244 MiB 00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_3 00:07:44.206 element at address: 0x2000003d2300 with size: 0.000244 MiB 00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_1 00:07:44.206 element at address: 0x2000003cec80 with size: 0.000244 MiB 00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_4 00:07:44.206 element at address: 0x2000003ceac0 with size: 0.000244 MiB 00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_5 00:07:44.206 element at address: 0x2000003ce840 with size: 0.000244 MiB 00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_2 00:07:44.206 element at address: 0x2000003cb1c0 with size: 0.000244 MiB 00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_6 00:07:44.206 element at address: 0x2000003cb000 with size: 0.000244 MiB 00:07:44.206 associated memzone info: size: 0.000122 MiB 
name: rte_cryptodev_data_7
00:07:44.206 element at address: 0x2000003cad80 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_3
00:07:44.206 element at address: 0x2000003c7700 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_8
00:07:44.206 element at address: 0x2000003c7540 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_9
00:07:44.206 element at address: 0x2000003c72c0 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_4
00:07:44.206 element at address: 0x2000003c3c40 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_10
00:07:44.206 element at address: 0x2000003c3a80 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_11
00:07:44.206 element at address: 0x2000003c3800 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_5
00:07:44.206 element at address: 0x2000003c0180 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_12
00:07:44.206 element at address: 0x2000003bffc0 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_13
00:07:44.206 element at address: 0x2000003bfd40 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_6
00:07:44.206 element at address: 0x2000003bc6c0 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_14
00:07:44.206 element at address: 0x2000003bc500 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_15
00:07:44.206 element at address: 0x2000003bc280 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_7
00:07:44.206 element at address: 0x2000003b8c00 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_16
00:07:44.206 element at address: 0x2000003b8a40 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_17
00:07:44.206 element at address: 0x2000003b87c0 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_8
00:07:44.206 element at address: 0x2000003b5140 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_18
00:07:44.206 element at address: 0x2000003b4f80 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_19
00:07:44.206 element at address: 0x2000003b4d00 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_9
00:07:44.206 element at address: 0x2000003b1680 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_20
00:07:44.206 element at address: 0x2000003b14c0 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_21
00:07:44.206 element at address: 0x2000003b1240 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_10
00:07:44.206 element at address: 0x2000003adbc0 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_22
00:07:44.206 element at address: 0x2000003ada00 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_23
00:07:44.206 element at address: 0x2000003ad780 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_11
00:07:44.206 element at address: 0x2000003aa100 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_24
00:07:44.206 element at address: 0x2000003a9f40 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_25
00:07:44.206 element at address: 0x2000003a9cc0 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_12
00:07:44.206 element at address: 0x2000003a6640 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_26
00:07:44.206 element at address: 0x2000003a6480 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_27
00:07:44.206 element at address: 0x2000003a6200 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_13
00:07:44.206 element at address: 0x2000003a2b80 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_28
00:07:44.206 element at address: 0x2000003a29c0 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_29
00:07:44.206 element at address: 0x2000003a2740 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_14
00:07:44.206 element at address: 0x20000039f0c0 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_30
00:07:44.206 element at address: 0x20000039ef00 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_31
00:07:44.206 element at address: 0x20000039ec80 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_15
00:07:44.206 element at address: 0x20000039b600 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_32
00:07:44.206 element at address: 0x20000039b440 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_33
00:07:44.206 element at address: 0x20000039b1c0 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_16
00:07:44.206 element at address: 0x200000397b40 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_34
00:07:44.206 element at address: 0x200000397980 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_35
00:07:44.206 element at address: 0x200000397700 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_17
00:07:44.206 element at address: 0x200000394080 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_36
00:07:44.206 element at address: 0x200000393ec0 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_37
00:07:44.206 element at address: 0x200000393c40 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_18
00:07:44.206 element at address: 0x2000003905c0 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_38
00:07:44.206 element at address: 0x200000390400 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_39
00:07:44.206 element at address: 0x200000390180 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_19
00:07:44.206 element at address: 0x20000038cb00 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_40
00:07:44.206 element at address: 0x20000038c940 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_41
00:07:44.206 element at address: 0x20000038c6c0 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_20
00:07:44.206 element at address: 0x200000389040 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_42
00:07:44.206 element at address: 0x200000388e80 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_43
00:07:44.206 element at address: 0x200000388c00 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_21
00:07:44.206 element at address: 0x200000385580 with size: 0.000244 MiB
00:07:44.206 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_44
00:07:44.207 element at address: 0x2000003853c0 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_45
00:07:44.207 element at address: 0x200000385140 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_22
00:07:44.207 element at address: 0x200000381ac0 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_46
00:07:44.207 element at address: 0x200000381900 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_47
00:07:44.207 element at address: 0x200000381680 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_23
00:07:44.207 element at address: 0x20000037e000 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_48
00:07:44.207 element at address: 0x20000037de40 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_49
00:07:44.207 element at address: 0x20000037dbc0 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_24
00:07:44.207 element at address: 0x20000037a540 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_50
00:07:44.207 element at address: 0x20000037a380 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_51
00:07:44.207 element at address: 0x20000037a100 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_25
00:07:44.207 element at address: 0x200000376a80 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_52
00:07:44.207 element at address: 0x2000003768c0 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_53
00:07:44.207 element at address: 0x200000376640 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_26
00:07:44.207 element at address: 0x200000372fc0 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_54
00:07:44.207 element at address: 0x200000372e00 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_55
00:07:44.207 element at address: 0x200000372b80 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_27
00:07:44.207 element at address: 0x20000036f500 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_56
00:07:44.207 element at address: 0x20000036f340 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_57
00:07:44.207 element at address: 0x20000036f0c0 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_28
00:07:44.207 element at address: 0x20000036ba40 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_58
00:07:44.207 element at address: 0x20000036b880 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_59
00:07:44.207 element at address: 0x20000036b600 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_29
00:07:44.207 element at address: 0x200000367f80 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_60
00:07:44.207 element at address: 0x200000367dc0 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_61
00:07:44.207 element at address: 0x200000367b40 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_30
00:07:44.207 element at address: 0x2000003644c0 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_62
00:07:44.207 element at address: 0x200000364300 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_63
00:07:44.207 element at address: 0x200000364080 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_31
00:07:44.207 element at address: 0x200000360a00 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_64
00:07:44.207 element at address: 0x200000360840 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_65
00:07:44.207 element at address: 0x2000003605c0 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_32
00:07:44.207 element at address: 0x20000035cf40 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_66
00:07:44.207 element at address: 0x20000035cd80 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_67
00:07:44.207 element at address: 0x20000035cb00 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_33
00:07:44.207 element at address: 0x200000359480 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_68
00:07:44.207 element at address: 0x2000003592c0 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_69
00:07:44.207 element at address: 0x200000359040 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_34
00:07:44.207 element at address: 0x2000003559c0 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_70
00:07:44.207 element at address: 0x200000355800 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_71
00:07:44.207 element at address: 0x200000355580 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_35
00:07:44.207 element at address: 0x200000351f00 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_72
00:07:44.207 element at address: 0x200000351d40 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_73
00:07:44.207 element at address: 0x200000351ac0 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_36
00:07:44.207 element at address: 0x20000034e440 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_74
00:07:44.207 element at address: 0x20000034e280 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_75
00:07:44.207 element at address: 0x20000034e000 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_37
00:07:44.207 element at address: 0x20000034a980 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_76
00:07:44.207 element at address: 0x20000034a7c0 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_77
00:07:44.207 element at address: 0x20000034a540 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_38
00:07:44.207 element at address: 0x200000346ec0 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_78
00:07:44.207 element at address: 0x200000346d00 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_79
00:07:44.207 element at address: 0x200000346a80 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_39
00:07:44.207 element at address: 0x200000343400 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_80
00:07:44.207 element at address: 0x200000343240 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_81
00:07:44.207 element at address: 0x200000342fc0 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_40
00:07:44.207 element at address: 0x20000033f940 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_82
00:07:44.207 element at address: 0x20000033f780 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_83
00:07:44.207 element at address: 0x20000033f500 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_41
00:07:44.207 element at address: 0x20000033be80 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_84
00:07:44.207 element at address: 0x20000033bcc0 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_85
00:07:44.207 element at address: 0x20000033ba40 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_42
00:07:44.207 element at address: 0x2000003383c0 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_86
00:07:44.207 element at address: 0x200000338200 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_87
00:07:44.207 element at address: 0x200000337f80 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_43
00:07:44.207 element at address: 0x200000334900 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_88
00:07:44.207 element at address: 0x200000334740 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_89
00:07:44.207 element at address: 0x2000003344c0 with size: 0.000244 MiB
00:07:44.207 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_44
00:07:44.208 element at address: 0x200000330e40 with size: 0.000244 MiB
00:07:44.208 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_90
00:07:44.208 element at address: 0x200000330c80 with size: 0.000244 MiB
00:07:44.208 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_91
00:07:44.208 element at address: 0x200000330a00 with size: 0.000244 MiB
00:07:44.208 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_45
00:07:44.208 element at address: 0x20000032d380 with size: 0.000244 MiB
00:07:44.208 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_92
00:07:44.208 element at address: 0x20000032d1c0 with size: 0.000244 MiB
00:07:44.208 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_93
00:07:44.208 element at address: 0x20000032cf40 with size: 0.000244 MiB
00:07:44.208 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_46
00:07:44.208 element at address: 0x2000003298c0 with size: 0.000244 MiB
00:07:44.208 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_94
00:07:44.208 element at address: 0x200000329700 with size: 0.000244 MiB
00:07:44.208 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_95
00:07:44.208 element at address: 0x200000329480 with size: 0.000244 MiB
00:07:44.208 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_47
00:07:44.208 element at address: 0x2000003d5d00 with size: 0.000183 MiB
00:07:44.208 associated memzone info: size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1
00:07:44.208 13:07:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:07:44.208 13:07:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 611499
00:07:44.208 13:07:24 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 611499 ']'
00:07:44.208 13:07:24 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 611499
00:07:44.208 13:07:24 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:07:44.208 13:07:24 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:44.208 13:07:24 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 611499
00:07:44.208 13:07:24 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:44.208 13:07:24 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:44.208 13:07:24 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 611499'
00:07:44.208 killing process with pid 611499
00:07:44.208 13:07:24 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 611499
00:07:44.208 13:07:24 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 611499
00:07:44.467
00:07:44.467 real 0m1.667s
00:07:44.467 user 0m1.805s
00:07:44.467 sys 0m0.545s
00:07:44.467 13:07:24 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:44.467 13:07:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:07:44.467 ************************************
00:07:44.467 END TEST dpdk_mem_utility
00:07:44.467 ************************************
00:07:44.467 13:07:24 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh
00:07:44.467 13:07:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:44.467 13:07:24 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:44.467 13:07:24 -- common/autotest_common.sh@10 -- # set +x
00:07:44.467 ************************************
00:07:44.467 START TEST event
00:07:44.467 ************************************
00:07:44.467 13:07:24 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh
00:07:44.726 * Looking for test storage...
00:07:44.726 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event
00:07:44.726 13:07:25 event -- event/event.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh
00:07:44.726 13:07:25 event -- bdev/nbd_common.sh@6 -- # set -e
00:07:44.726 13:07:25 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:44.726 13:07:25 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:07:44.726 13:07:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:44.726 13:07:25 event -- common/autotest_common.sh@10 -- # set +x
00:07:44.726 ************************************
00:07:44.726 START TEST event_perf
00:07:44.726 ************************************
00:07:44.726 13:07:25 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:07:44.726 Running I/O for 1 seconds...[2024-07-26 13:07:25.132902] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization...
00:07:44.726 [2024-07-26 13:07:25.132958] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid611824 ]
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3d:01.0 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3d:01.1 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3d:01.2 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3d:01.3 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3d:01.4 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3d:01.5 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3d:01.6 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3d:01.7 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3d:02.0 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3d:02.1 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3d:02.2 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3d:02.3 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3d:02.4 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3d:02.5 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3d:02.6 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3d:02.7 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3f:01.0 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3f:01.1 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3f:01.2 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3f:01.3 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3f:01.4 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3f:01.5 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3f:01.6 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3f:01.7 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3f:02.0 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3f:02.1 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3f:02.2 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3f:02.3 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3f:02.4 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3f:02.5 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3f:02.6 cannot be used
00:07:44.726 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:44.726 EAL: Requested device 0000:3f:02.7 cannot be used
00:07:44.726 [2024-07-26 13:07:25.244242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:44.984 [2024-07-26 13:07:25.333284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:44.984 [2024-07-26 13:07:25.333379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:44.984 [2024-07-26 13:07:25.333463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:07:44.984 [2024-07-26 13:07:25.333467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:45.918 Running I/O for 1 seconds...
00:07:45.918 lcore 0: 189203
00:07:45.918 lcore 1: 189201
00:07:45.918 lcore 2: 189202
00:07:45.918 lcore 3: 189203
00:07:45.918 done.
00:07:45.918
00:07:45.918 real 0m1.303s
00:07:45.918 user 0m4.174s
00:07:45.918 sys 0m0.121s
00:07:45.918 13:07:26 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:45.918 13:07:26 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:07:45.918 ************************************
00:07:45.918 END TEST event_perf
00:07:45.918 ************************************
00:07:46.176 13:07:26 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:07:46.176 13:07:26 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:07:46.176 13:07:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:46.176 13:07:26 event -- common/autotest_common.sh@10 -- # set +x
00:07:46.176 ************************************
00:07:46.176 START TEST event_reactor
00:07:46.176 ************************************
00:07:46.176 13:07:26 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:07:46.176 [2024-07-26 13:07:26.502888] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization...
00:07:46.176 [2024-07-26 13:07:26.502941] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid612114 ]
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3d:01.0 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3d:01.1 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3d:01.2 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3d:01.3 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3d:01.4 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3d:01.5 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3d:01.6 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3d:01.7 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3d:02.0 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3d:02.1 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3d:02.2 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3d:02.3 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3d:02.4 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3d:02.5 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3d:02.6 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3d:02.7 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3f:01.0 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3f:01.1 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3f:01.2 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3f:01.3 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3f:01.4 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3f:01.5 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3f:01.6 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3f:01.7 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3f:02.0 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3f:02.1 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3f:02.2 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3f:02.3 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3f:02.4 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3f:02.5 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3f:02.6 cannot be used
00:07:46.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:46.176 EAL: Requested device 0000:3f:02.7 cannot be used
00:07:46.176 [2024-07-26 13:07:26.635260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:46.434 [2024-07-26 13:07:26.717335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:47.366 test_start
00:07:47.366 oneshot
00:07:47.366 tick 100
00:07:47.366 tick 100
00:07:47.366 tick 250
00:07:47.367 tick 100
00:07:47.367 tick 100
00:07:47.367 tick 100
00:07:47.367 tick 250
00:07:47.367 tick 500
00:07:47.367 tick 100
00:07:47.367 tick 100
00:07:47.367 tick 250
00:07:47.367 tick 100
00:07:47.367 tick 100
00:07:47.367 test_end
00:07:47.367
00:07:47.367 real 0m1.303s
00:07:47.367 user 0m1.159s
00:07:47.367 sys 0m0.138s
00:07:47.367 13:07:27 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:47.367 13:07:27 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:07:47.367 ************************************
00:07:47.367 END TEST event_reactor
00:07:47.367 ************************************
00:07:47.367 13:07:27 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
13:07:27 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:07:47.367 13:07:27 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:47.367 13:07:27 event -- common/autotest_common.sh@10 -- # set +x
00:07:47.367 ************************************
00:07:47.367 START TEST event_reactor_perf
00:07:47.367 ************************************
00:07:47.367 13:07:27 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:07:47.625 [2024-07-26 13:07:27.895387] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization...
00:07:47.625 [2024-07-26 13:07:27.895452] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid612392 ]
00:07:47.625 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:47.625 EAL: Requested device 0000:3d:01.0 cannot be used
[the previous two lines repeat for each device function 0000:3d:01.1 through 0000:3f:02.7]
00:07:47.625 [2024-07-26 13:07:28.028247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:47.625 [2024-07-26 13:07:28.109726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:48.999 test_start
00:07:48.999 test_end
00:07:48.999 Performance: 355880 events per second
00:07:48.999
00:07:48.999 real 0m1.315s
00:07:48.999 user 0m1.168s
00:07:48.999 sys 0m0.141s
00:07:48.999 13:07:29 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:48.999
13:07:29 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:07:48.999 ************************************
00:07:48.999 END TEST event_reactor_perf
00:07:48.999 ************************************
00:07:48.999 13:07:29 event -- event/event.sh@49 -- # uname -s
00:07:48.999 13:07:29 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:07:48.999 13:07:29 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:07:48.999 13:07:29 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:48.999 13:07:29 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:48.999 13:07:29 event -- common/autotest_common.sh@10 -- # set +x
00:07:48.999 ************************************
00:07:48.999 START TEST event_scheduler
00:07:48.999 ************************************
00:07:48.999 13:07:29 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:07:48.999 * Looking for test storage...
00:07:48.999 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler
00:07:48.999 13:07:29 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:07:48.999 13:07:29 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=612704
00:07:48.999 13:07:29 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:07:48.999 13:07:29 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 612704
00:07:48.999 13:07:29 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 612704 ']'
00:07:48.999 13:07:29 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:48.999 13:07:29 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:48.999 13:07:29 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:48.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:48.999 13:07:29 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:48.999 13:07:29 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:07:48.999 13:07:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:48.999 [2024-07-26 13:07:29.409979] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization...
00:07:48.999 [2024-07-26 13:07:29.410043] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid612704 ]
00:07:48.999 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:49.000 EAL: Requested device 0000:3d:01.0 cannot be used
[the previous two lines repeat for each device function 0000:3d:01.1 through 0000:3f:02.7]
00:07:49.000 [2024-07-26 13:07:29.514467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:49.258 [2024-07-26 13:07:29.594503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:49.258 [2024-07-26 13:07:29.594584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:49.258 [2024-07-26 13:07:29.594602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:07:49.258 [2024-07-26 13:07:29.594604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:49.824 13:07:30 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:49.824 13:07:30 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0
00:07:49.824 13:07:30 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:07:49.824 13:07:30 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:49.824 13:07:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:49.824 [2024-07-26 13:07:30.313257] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:07:49.824 [2024-07-26 13:07:30.313282] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor
00:07:49.824 [2024-07-26 13:07:30.313292] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:07:49.824 [2024-07-26 13:07:30.313300] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:07:49.824 [2024-07-26 13:07:30.313307] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:07:49.824 13:07:30 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:49.824 13:07:30 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:07:49.824 13:07:30 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:49.824 13:07:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:50.082 [2024-07-26 13:07:30.409907] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:07:50.082 13:07:30 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:50.082 13:07:30 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:07:50.082 13:07:30 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:50.082 13:07:30 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:50.082 13:07:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:50.082 ************************************
00:07:50.082 START TEST scheduler_create_thread
00:07:50.082 ************************************
00:07:50.082 13:07:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread
00:07:50.082 13:07:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:07:50.082 13:07:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:50.082 13:07:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:50.082 2
[the same rpc_cmd/xtrace pattern repeats for scheduler/scheduler.sh@13 through sh@19: scheduler_thread_create -n active_pinned -m 0x2/0x4/0x8 -a 100 and -n idle_pinned -m 0x1/0x2/0x4/0x8 -a 0, printing thread ids 3 through 9]
00:07:50.083 13:07:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:50.083 13:07:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:07:50.083 13:07:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:50.083 13:07:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:50.083 10
00:07:50.083 13:07:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:50.083 13:07:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:07:50.083 13:07:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:50.083 13:07:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:50.083 13:07:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:50.083 13:07:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:07:50.083 13:07:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:07:50.083 13:07:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:50.083 13:07:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:50.648 13:07:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:50.648 13:07:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:07:50.648 13:07:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:50.648 13:07:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:52.018 13:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:52.018 13:07:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:07:52.018 13:07:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:07:52.018 13:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:52.018 13:07:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:53.390 13:07:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:53.390
00:07:53.390 real 0m3.103s
00:07:53.390 user 0m0.025s
00:07:53.390 sys 0m0.006s
00:07:53.390 13:07:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:53.390 13:07:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:07:53.390 ************************************
00:07:53.390 END TEST scheduler_create_thread
00:07:53.390 ************************************
00:07:53.390 13:07:33 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:07:53.390 13:07:33 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 612704
00:07:53.390 13:07:33 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 612704 ']'
00:07:53.390 13:07:33 event.event_scheduler --
common/autotest_common.sh@954 -- # kill -0 612704
00:07:53.390 13:07:33 event.event_scheduler -- common/autotest_common.sh@955 -- # uname
00:07:53.390 13:07:33 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:53.390 13:07:33 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 612704
00:07:53.390 13:07:33 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:07:53.390 13:07:33 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:07:53.390 13:07:33 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 612704'
00:07:53.390 killing process with pid 612704
00:07:53.390 13:07:33 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 612704
00:07:53.390 13:07:33 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 612704
00:07:53.648 [2024-07-26 13:07:33.937242] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:07:53.648
00:07:53.648 real 0m4.881s
00:07:53.648 user 0m9.510s
00:07:53.648 sys 0m0.490s
00:07:53.648 13:07:34 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:53.648 13:07:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:07:53.648 ************************************
00:07:53.648 END TEST event_scheduler
00:07:53.648 ************************************
00:07:53.906 13:07:34 event -- event/event.sh@51 -- # modprobe -n nbd
00:07:53.906 13:07:34 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:07:53.906 13:07:34 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:53.906 13:07:34 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:53.906 13:07:34 event -- common/autotest_common.sh@10 -- # set +x
00:07:53.906 ************************************
00:07:53.906 START TEST app_repeat
00:07:53.906 ************************************
00:07:53.906 13:07:34 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test
00:07:53.906 13:07:34 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:53.906 13:07:34 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:07:53.907 13:07:34 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:07:53.907 13:07:34 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:07:53.907 13:07:34 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:07:53.907 13:07:34 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:07:53.907 13:07:34 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:07:53.907 13:07:34 event.app_repeat -- event/event.sh@19 -- # repeat_pid=613544
00:07:53.907 13:07:34 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:07:53.907 13:07:34 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:07:53.907 13:07:34 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 613544'
00:07:53.907 Process app_repeat pid: 613544
00:07:53.907 13:07:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:07:53.907 13:07:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:07:53.907 spdk_app_start Round 0
00:07:53.907 13:07:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 613544 /var/tmp/spdk-nbd.sock
00:07:53.907 13:07:34 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 613544 ']'
00:07:53.907 13:07:34 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:07:53.907 13:07:34 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:53.907 13:07:34 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:07:53.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:07:53.907 13:07:34 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:53.907 13:07:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:07:53.907 [2024-07-26 13:07:34.269580] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization...
00:07:53.907 [2024-07-26 13:07:34.269638] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid613544 ]
00:07:53.907 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:53.907 EAL: Requested device 0000:3d:01.0 cannot be used
[the previous two lines repeat for each device function 0000:3d:01.1 through 0000:3f:02.7]
00:07:53.907 [2024-07-26 13:07:34.403864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:54.165 [2024-07-26 13:07:34.487701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:54.165 [2024-07-26 13:07:34.487706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:54.730 13:07:35 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:54.730 13:07:35 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:07:54.730 13:07:35 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:54.988 Malloc0
00:07:54.988 13:07:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:07:55.247 Malloc1
00:07:55.247 13:07:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:07:55.247 13:07:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:55.247
13:07:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:55.247 13:07:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:55.247 13:07:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:55.247 13:07:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:55.247 13:07:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:55.247 13:07:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.247 13:07:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:55.247 13:07:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:55.247 13:07:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:55.247 13:07:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:55.247 13:07:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:55.247 13:07:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:55.247 13:07:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:55.247 13:07:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:55.505 /dev/nbd0 00:07:55.505 13:07:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:55.505 13:07:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:55.505 13:07:35 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:55.505 13:07:35 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:55.505 13:07:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:55.505 13:07:35 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:55.505 
13:07:35 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:55.505 13:07:35 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:55.505 13:07:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:55.505 13:07:35 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:55.505 13:07:35 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:55.505 1+0 records in 00:07:55.505 1+0 records out 00:07:55.505 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228459 s, 17.9 MB/s 00:07:55.505 13:07:35 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:55.505 13:07:35 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:55.505 13:07:35 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:55.505 13:07:35 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:55.505 13:07:35 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:55.505 13:07:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:55.505 13:07:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:55.505 13:07:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:55.763 /dev/nbd1 00:07:55.763 13:07:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:55.763 13:07:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:55.763 13:07:36 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:55.763 13:07:36 event.app_repeat -- common/autotest_common.sh@869 -- # local i 
00:07:55.763 13:07:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:55.763 13:07:36 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:55.763 13:07:36 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:55.763 13:07:36 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:55.763 13:07:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:55.763 13:07:36 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:55.763 13:07:36 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:55.763 1+0 records in 00:07:55.763 1+0 records out 00:07:55.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254247 s, 16.1 MB/s 00:07:55.763 13:07:36 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:55.763 13:07:36 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:55.763 13:07:36 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:07:55.763 13:07:36 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:55.763 13:07:36 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:55.763 13:07:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:55.763 13:07:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:55.763 13:07:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:55.763 13:07:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.763 13:07:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:56.021 { 00:07:56.021 "nbd_device": "/dev/nbd0", 00:07:56.021 "bdev_name": "Malloc0" 00:07:56.021 }, 00:07:56.021 { 00:07:56.021 "nbd_device": "/dev/nbd1", 00:07:56.021 "bdev_name": "Malloc1" 00:07:56.021 } 00:07:56.021 ]' 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:56.021 { 00:07:56.021 "nbd_device": "/dev/nbd0", 00:07:56.021 "bdev_name": "Malloc0" 00:07:56.021 }, 00:07:56.021 { 00:07:56.021 "nbd_device": "/dev/nbd1", 00:07:56.021 "bdev_name": "Malloc1" 00:07:56.021 } 00:07:56.021 ]' 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:56.021 /dev/nbd1' 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:56.021 /dev/nbd1' 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 
00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:56.021 256+0 records in 00:07:56.021 256+0 records out 00:07:56.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106977 s, 98.0 MB/s 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:56.021 256+0 records in 00:07:56.021 256+0 records out 00:07:56.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0190804 s, 55.0 MB/s 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:56.021 256+0 records in 00:07:56.021 256+0 records out 00:07:56.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.017725 s, 59.2 MB/s 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.021 13:07:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:56.279 13:07:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:56.279 13:07:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:56.279 13:07:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:56.279 13:07:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.279 13:07:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.279 13:07:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:56.279 13:07:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:56.279 13:07:36 event.app_repeat -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:56.279 13:07:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.279 13:07:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:56.537 13:07:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:56.537 13:07:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:56.537 13:07:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:56.537 13:07:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.537 13:07:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.537 13:07:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:56.537 13:07:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:56.537 13:07:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:56.537 13:07:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:56.537 13:07:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.537 13:07:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:56.795 13:07:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:56.795 13:07:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:56.795 13:07:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:57.120 13:07:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:57.120 13:07:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:57.120 13:07:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:57.120 13:07:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:57.120 13:07:37 
event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:57.120 13:07:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:57.120 13:07:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:57.120 13:07:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:57.120 13:07:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:57.120 13:07:37 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:57.120 13:07:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:57.379 [2024-07-26 13:07:37.805805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:57.379 [2024-07-26 13:07:37.883750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.379 [2024-07-26 13:07:37.883754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.638 [2024-07-26 13:07:37.928629] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:57.638 [2024-07-26 13:07:37.928677] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:08:00.162 13:07:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:00.162 13:07:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:00.162 spdk_app_start Round 1 00:08:00.162 13:07:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 613544 /var/tmp/spdk-nbd.sock 00:08:00.162 13:07:40 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 613544 ']' 00:08:00.162 13:07:40 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:00.162 13:07:40 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.162 13:07:40 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:00.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:00.162 13:07:40 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.162 13:07:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:00.419 13:07:40 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:00.419 13:07:40 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:00.419 13:07:40 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:00.676 Malloc0 00:08:00.676 13:07:41 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:00.934 Malloc1 00:08:00.934 13:07:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:00.934 13:07:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.934 13:07:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # 
bdev_list=('Malloc0' 'Malloc1') 00:08:00.934 13:07:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:00.934 13:07:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:00.934 13:07:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:00.934 13:07:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:00.934 13:07:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.934 13:07:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:00.934 13:07:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:00.934 13:07:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:00.934 13:07:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:00.934 13:07:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:00.934 13:07:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:00.934 13:07:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:00.934 13:07:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:01.192 /dev/nbd0 00:08:01.192 13:07:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:01.192 13:07:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:01.192 13:07:41 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:01.192 13:07:41 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:01.192 13:07:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:01.192 13:07:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:01.192 13:07:41 event.app_repeat -- common/autotest_common.sh@872 -- # 
grep -q -w nbd0 /proc/partitions 00:08:01.192 13:07:41 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:01.192 13:07:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:01.192 13:07:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:01.192 13:07:41 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:01.192 1+0 records in 00:08:01.192 1+0 records out 00:08:01.192 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023417 s, 17.5 MB/s 00:08:01.192 13:07:41 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:01.192 13:07:41 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:01.192 13:07:41 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:01.192 13:07:41 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:01.192 13:07:41 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:01.192 13:07:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:01.192 13:07:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:01.192 13:07:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:01.450 /dev/nbd1 00:08:01.450 13:07:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:01.450 13:07:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:01.450 13:07:41 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:01.450 13:07:41 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:01.450 13:07:41 event.app_repeat -- common/autotest_common.sh@871 -- # 
(( i = 1 )) 00:08:01.450 13:07:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:01.450 13:07:41 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:01.450 13:07:41 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:01.450 13:07:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:01.450 13:07:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:01.450 13:07:41 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:01.450 1+0 records in 00:08:01.450 1+0 records out 00:08:01.450 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025927 s, 15.8 MB/s 00:08:01.450 13:07:41 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:01.450 13:07:41 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:01.450 13:07:41 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:01.450 13:07:41 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:01.450 13:07:41 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:01.450 13:07:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:01.450 13:07:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:01.450 13:07:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:01.450 13:07:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.450 13:07:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:08:01.709 { 00:08:01.709 "nbd_device": "/dev/nbd0", 00:08:01.709 "bdev_name": "Malloc0" 00:08:01.709 }, 00:08:01.709 { 00:08:01.709 "nbd_device": "/dev/nbd1", 00:08:01.709 "bdev_name": "Malloc1" 00:08:01.709 } 00:08:01.709 ]' 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:01.709 { 00:08:01.709 "nbd_device": "/dev/nbd0", 00:08:01.709 "bdev_name": "Malloc0" 00:08:01.709 }, 00:08:01.709 { 00:08:01.709 "nbd_device": "/dev/nbd1", 00:08:01.709 "bdev_name": "Malloc1" 00:08:01.709 } 00:08:01.709 ]' 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:01.709 /dev/nbd1' 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:01.709 /dev/nbd1' 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd 
if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:01.709 256+0 records in 00:08:01.709 256+0 records out 00:08:01.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103878 s, 101 MB/s 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:01.709 256+0 records in 00:08:01.709 256+0 records out 00:08:01.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171329 s, 61.2 MB/s 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:01.709 256+0 records in 00:08:01.709 256+0 records out 00:08:01.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263015 s, 39.9 MB/s 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.709 13:07:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:01.967 13:07:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:01.967 13:07:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:01.967 13:07:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:01.967 13:07:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.967 13:07:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.967 13:07:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:01.967 13:07:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:01.967 13:07:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.967 13:07:42 event.app_repeat -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.967 13:07:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:02.225 13:07:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:02.225 13:07:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:02.225 13:07:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:02.225 13:07:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.225 13:07:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.225 13:07:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:02.225 13:07:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:02.225 13:07:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.225 13:07:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:02.225 13:07:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.225 13:07:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:02.483 13:07:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:02.483 13:07:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:02.483 13:07:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:02.483 13:07:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:02.483 13:07:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:02.483 13:07:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:02.483 13:07:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:02.483 13:07:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:02.483 13:07:42 
event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:02.483 13:07:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:02.483 13:07:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:02.483 13:07:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:02.483 13:07:42 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:02.741 13:07:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:02.998 [2024-07-26 13:07:43.425801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:02.998 [2024-07-26 13:07:43.502167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.998 [2024-07-26 13:07:43.502173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.255 [2024-07-26 13:07:43.547794] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:03.255 [2024-07-26 13:07:43.547841] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:05.783 13:07:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:05.783 13:07:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:05.783 spdk_app_start Round 2 00:08:05.783 13:07:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 613544 /var/tmp/spdk-nbd.sock 00:08:05.783 13:07:46 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 613544 ']' 00:08:05.783 13:07:46 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:05.783 13:07:46 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.783 13:07:46 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:08:05.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:05.783 13:07:46 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.783 13:07:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:06.041 13:07:46 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:06.041 13:07:46 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:06.041 13:07:46 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:06.298 Malloc0 00:08:06.298 13:07:46 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:06.556 Malloc1 00:08:06.556 13:07:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:06.556 13:07:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.556 13:07:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:06.556 13:07:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:06.556 13:07:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:06.556 13:07:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:06.556 13:07:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:06.556 13:07:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.556 13:07:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:06.556 13:07:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:06.556 13:07:46 event.app_repeat -- bdev/nbd_common.sh@11 -- 
# nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:06.556 13:07:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:06.556 13:07:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:06.556 13:07:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:06.556 13:07:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:06.556 13:07:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:06.814 /dev/nbd0 00:08:06.814 13:07:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:06.814 13:07:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:06.814 13:07:47 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:06.814 13:07:47 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:06.814 13:07:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:06.814 13:07:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:06.814 13:07:47 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:06.814 13:07:47 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:06.814 13:07:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:06.814 13:07:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:06.814 13:07:47 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:06.814 1+0 records in 00:08:06.814 1+0 records out 00:08:06.814 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000128816 s, 31.8 MB/s 00:08:06.814 13:07:47 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:06.814 13:07:47 event.app_repeat 
-- common/autotest_common.sh@886 -- # size=4096 00:08:06.814 13:07:47 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:06.814 13:07:47 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:06.814 13:07:47 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:06.814 13:07:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.814 13:07:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:06.814 13:07:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:07.071 /dev/nbd1 00:08:07.071 13:07:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:07.071 13:07:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:07.071 13:07:47 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:07.071 13:07:47 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:07.071 13:07:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:07.071 13:07:47 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:07.071 13:07:47 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:07.071 13:07:47 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:07.071 13:07:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:07.071 13:07:47 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:07.071 13:07:47 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:07.071 1+0 records in 00:08:07.071 1+0 records out 00:08:07.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276592 s, 14.8 MB/s 00:08:07.071 
13:07:47 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:07.071 13:07:47 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:07.072 13:07:47 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:07.072 13:07:47 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:07.072 13:07:47 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:07.072 13:07:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:07.072 13:07:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:07.072 13:07:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:07.072 13:07:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.072 13:07:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:07.330 { 00:08:07.330 "nbd_device": "/dev/nbd0", 00:08:07.330 "bdev_name": "Malloc0" 00:08:07.330 }, 00:08:07.330 { 00:08:07.330 "nbd_device": "/dev/nbd1", 00:08:07.330 "bdev_name": "Malloc1" 00:08:07.330 } 00:08:07.330 ]' 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:07.330 { 00:08:07.330 "nbd_device": "/dev/nbd0", 00:08:07.330 "bdev_name": "Malloc0" 00:08:07.330 }, 00:08:07.330 { 00:08:07.330 "nbd_device": "/dev/nbd1", 00:08:07.330 "bdev_name": "Malloc1" 00:08:07.330 } 00:08:07.330 ]' 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:07.330 /dev/nbd1' 00:08:07.330 13:07:47 event.app_repeat -- 
bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:07.330 /dev/nbd1' 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:07.330 256+0 records in 00:08:07.330 256+0 records out 00:08:07.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109441 s, 95.8 MB/s 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:07.330 256+0 records in 00:08:07.330 256+0 records out 00:08:07.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275248 s, 38.1 MB/s 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:07.330 13:07:47 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:07.330 256+0 records in 00:08:07.330 256+0 records out 00:08:07.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0183296 s, 57.2 MB/s 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.330 13:07:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:07.588 13:07:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:07.588 13:07:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:07.588 13:07:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:07.588 13:07:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.588 13:07:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.588 13:07:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:07.588 13:07:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:07.588 13:07:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.588 13:07:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.588 13:07:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:07.846 13:07:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:07.846 13:07:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:07.846 13:07:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:07.846 13:07:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.846 13:07:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.846 13:07:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:07.846 13:07:48 
event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:07.846 13:07:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.846 13:07:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:07.846 13:07:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.846 13:07:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:08.103 13:07:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:08.103 13:07:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:08.103 13:07:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:08.103 13:07:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:08.103 13:07:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:08.103 13:07:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:08.103 13:07:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:08.103 13:07:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:08.103 13:07:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:08.103 13:07:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:08.104 13:07:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:08.104 13:07:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:08.104 13:07:48 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:08.361 13:07:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:08.620 [2024-07-26 13:07:49.072521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:08.878 [2024-07-26 13:07:49.149652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.878 [2024-07-26 
13:07:49.149657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.878 [2024-07-26 13:07:49.194149] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:08.878 [2024-07-26 13:07:49.194197] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:11.404 13:07:51 event.app_repeat -- event/event.sh@38 -- # waitforlisten 613544 /var/tmp/spdk-nbd.sock 00:08:11.404 13:07:51 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 613544 ']' 00:08:11.404 13:07:51 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:11.404 13:07:51 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:11.404 13:07:51 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:11.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
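The nbd_get_count sequence traced above (rpc.py nbd_get_disks, then jq, then grep -c) can be reproduced without a running SPDK target. A minimal sketch with a canned JSON reply standing in for the RPC output; note that grep -c exits non-zero when it counts zero matches, which is why the trace shows a bare `true` after it whenever the disk list is empty:

```shell
# Canned reply standing in for:
#   rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
nbd_disks_json='[ { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
                  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" } ]'

# Reduce the JSON to one device path per line, as nbd_common.sh@64 does.
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')

# grep -c exits with status 1 on a zero count, so the count is taken under
# a `|| true` guard (the `-- # true` lines in the trace above).
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"   # 2 for the canned reply; 0 once both disks are stopped
```

The same pipeline yields 0 against an empty reply, which is the post-teardown check the trace performs before `'[' 0 -ne 0 ']'`.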
00:08:11.404 13:07:51 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:11.404 13:07:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:11.662 13:07:52 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:11.662 13:07:52 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:11.662 13:07:52 event.app_repeat -- event/event.sh@39 -- # killprocess 613544 00:08:11.662 13:07:52 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 613544 ']' 00:08:11.662 13:07:52 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 613544 00:08:11.662 13:07:52 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:08:11.662 13:07:52 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:11.662 13:07:52 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 613544 00:08:11.662 13:07:52 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:11.662 13:07:52 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:11.662 13:07:52 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 613544' 00:08:11.662 killing process with pid 613544 00:08:11.662 13:07:52 event.app_repeat -- common/autotest_common.sh@969 -- # kill 613544 00:08:11.662 13:07:52 event.app_repeat -- common/autotest_common.sh@974 -- # wait 613544 00:08:11.920 spdk_app_start is called in Round 0. 00:08:11.920 Shutdown signal received, stop current app iteration 00:08:11.920 Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 reinitialization... 00:08:11.920 spdk_app_start is called in Round 1. 00:08:11.920 Shutdown signal received, stop current app iteration 00:08:11.920 Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 reinitialization... 00:08:11.920 spdk_app_start is called in Round 2. 
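The write/verify cycle the trace exercises on each round (dd 1 MiB of urandom into a temp file, dd that file onto each nbd device, then cmp each device back against the file) is an ordinary shell pattern. A minimal self-contained sketch, with plain temp files standing in for /dev/nbd0 and /dev/nbd1 so it runs without any NBD setup:

```shell
# Stand-ins for the paths in the trace; no /dev/nbd* devices are needed.
tmp_file=$(mktemp)   # plays the role of .../spdk/test/event/nbdrandtest
nbd0=$(mktemp)       # plays /dev/nbd0
nbd1=$(mktemp)       # plays /dev/nbd1

# Write phase: 256 x 4 KiB = 1 MiB of random data, copied to each "device".
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
for dev in "$nbd0" "$nbd1"; do
  dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
done

# Verify phase: byte-compare the first 1M of each "device" to the source,
# exactly as nbd_common.sh@83 does with `cmp -b -n 1M`.
status=ok
for dev in "$nbd0" "$nbd1"; do
  cmp -b -n 1M "$tmp_file" "$dev" || status=mismatch
done
echo "$status"
rm -f "$tmp_file" "$nbd0" "$nbd1"
```

The real test adds oflag=direct/iflag=direct on the dd calls because it targets block devices; that is omitted here since regular files do not accept O_DIRECT the same way.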
00:08:11.920 Shutdown signal received, stop current app iteration 00:08:11.920 Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 reinitialization... 00:08:11.920 spdk_app_start is called in Round 3. 00:08:11.920 Shutdown signal received, stop current app iteration 00:08:11.921 13:07:52 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:11.921 13:07:52 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:11.921 00:08:11.921 real 0m18.085s 00:08:11.921 user 0m39.062s 00:08:11.921 sys 0m3.636s 00:08:11.921 13:07:52 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.921 13:07:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:11.921 ************************************ 00:08:11.921 END TEST app_repeat 00:08:11.921 ************************************ 00:08:11.921 13:07:52 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:11.921 00:08:11.921 real 0m27.389s 00:08:11.921 user 0m55.257s 00:08:11.921 sys 0m4.881s 00:08:11.921 13:07:52 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.921 13:07:52 event -- common/autotest_common.sh@10 -- # set +x 00:08:11.921 ************************************ 00:08:11.921 END TEST event 00:08:11.921 ************************************ 00:08:11.921 13:07:52 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:08:11.921 13:07:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:11.921 13:07:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:11.921 13:07:52 -- common/autotest_common.sh@10 -- # set +x 00:08:11.921 ************************************ 00:08:11.921 START TEST thread 00:08:11.921 ************************************ 00:08:11.921 13:07:52 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:08:12.179 * Looking for test storage... 
00:08:12.179 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread 00:08:12.179 13:07:52 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:12.179 13:07:52 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:12.179 13:07:52 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.179 13:07:52 thread -- common/autotest_common.sh@10 -- # set +x 00:08:12.179 ************************************ 00:08:12.179 START TEST thread_poller_perf 00:08:12.179 ************************************ 00:08:12.179 13:07:52 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:12.179 [2024-07-26 13:07:52.619417] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:08:12.179 [2024-07-26 13:07:52.619543] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid616947 ] 00:08:12.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.437 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:12.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.437 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:12.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.437 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:12.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.437 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:12.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.437 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:12.437 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.437 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:12.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.437 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:12.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.437 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:12.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.437 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:12.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.437 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:12.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.437 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:12.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.437 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:12.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.437 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:12.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.437 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:12.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.437 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:12.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.437 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:12.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.437 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:12.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.437 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:12.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.437 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:12.437 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.437 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:12.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.437 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:12.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.437 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:12.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.437 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:12.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.437 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:12.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.437 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:12.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.437 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:12.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.437 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:12.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.437 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:12.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.438 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:12.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.438 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:12.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.438 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:12.438 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:12.438 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:12.438 [2024-07-26 13:07:52.826623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.438 [2024-07-26 13:07:52.911548] reactor.c: 941:reactor_run: 
*NOTICE*: Reactor started on core 0 00:08:12.438 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:13.844 ====================================== 00:08:13.844 busy:2514285546 (cyc) 00:08:13.844 total_run_count: 288000 00:08:13.844 tsc_hz: 2500000000 (cyc) 00:08:13.844 ====================================== 00:08:13.844 poller_cost: 8730 (cyc), 3492 (nsec) 00:08:13.844 00:08:13.844 real 0m1.413s 00:08:13.844 user 0m1.207s 00:08:13.844 sys 0m0.198s 00:08:13.844 13:07:53 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.844 13:07:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:13.844 ************************************ 00:08:13.844 END TEST thread_poller_perf 00:08:13.844 ************************************ 00:08:13.844 13:07:54 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:13.844 13:07:54 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:13.844 13:07:54 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.844 13:07:54 thread -- common/autotest_common.sh@10 -- # set +x 00:08:13.844 ************************************ 00:08:13.844 START TEST thread_poller_perf 00:08:13.844 ************************************ 00:08:13.844 13:07:54 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:13.844 [2024-07-26 13:07:54.102175] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
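The poller_cost line in the summary above is derived from the other three figures: busy TSC cycles divided by total_run_count gives cycles per poller invocation, and tsc_hz converts that into nanoseconds. A quick check with shell integer arithmetic, using the numbers reported above:

```shell
# Figures copied from the poller_perf summary above.
busy=2514285546      # busy: TSC cycles spent over the 1 s run
runs=288000          # total_run_count
tsc_hz=2500000000    # tsc_hz: 2.5 GHz

cost_cyc=$(( busy / runs ))                      # cycles per poller invocation
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))  # same cost in nanoseconds
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"
# → poller_cost: 8730 (cyc), 3492 (nsec), matching the reported line
```

This also explains why the second run (with a 0-microsecond period) reports separately: the period changes how often each poller fires, so busy cycles and run count move together while the per-invocation cost stays comparable.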
00:08:13.844 [2024-07-26 13:07:54.102232] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid617236 ] 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:13.844 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:13.844 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.844 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:13.844 [2024-07-26 13:07:54.232989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.844 [2024-07-26 13:07:54.314781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.844 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:08:15.219 ======================================
00:08:15.219 busy:2502557700 (cyc)
00:08:15.219 total_run_count: 3796000
00:08:15.219 tsc_hz: 2500000000 (cyc)
00:08:15.219 ======================================
00:08:15.219 poller_cost: 659 (cyc), 263 (nsec)
00:08:15.219
00:08:15.219 real 0m1.318s
00:08:15.219 user 0m1.178s
00:08:15.219 sys 0m0.133s
00:08:15.219 13:07:55 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:15.219 13:07:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:08:15.219 ************************************
00:08:15.219 END TEST thread_poller_perf
00:08:15.219 ************************************
00:08:15.219 13:07:55 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:08:15.219
00:08:15.219 real 0m2.995s
00:08:15.219 user 0m2.492s
00:08:15.219 sys 0m0.510s
00:08:15.219 13:07:55 thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:15.219 13:07:55 thread -- common/autotest_common.sh@10 -- # set +x
00:08:15.219 ************************************
00:08:15.219 END TEST thread
00:08:15.219 ************************************
00:08:15.219 13:07:55 -- spdk/autotest.sh@184 -- # [[ 1 -eq 1 ]]
00:08:15.219 13:07:55 -- spdk/autotest.sh@185 -- # run_test accel /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh
00:08:15.219 13:07:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:15.219 13:07:55 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:15.219 13:07:55 -- common/autotest_common.sh@10 -- # set +x
00:08:15.219 ************************************
00:08:15.219 START TEST accel
00:08:15.219 ************************************
00:08:15.219 13:07:55 accel -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh
00:08:15.219 * Looking for test storage...
00:08:15.219 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:08:15.219 13:07:55 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:08:15.219 13:07:55 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:08:15.219 13:07:55 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:15.219 13:07:55 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=617556 00:08:15.219 13:07:55 accel -- accel/accel.sh@63 -- # waitforlisten 617556 00:08:15.219 13:07:55 accel -- common/autotest_common.sh@831 -- # '[' -z 617556 ']' 00:08:15.219 13:07:55 accel -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.219 13:07:55 accel -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:15.219 13:07:55 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:08:15.219 13:07:55 accel -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.219 13:07:55 accel -- accel/accel.sh@61 -- # build_accel_config 00:08:15.219 13:07:55 accel -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:15.219 13:07:55 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:15.219 13:07:55 accel -- common/autotest_common.sh@10 -- # set +x 00:08:15.219 13:07:55 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:15.219 13:07:55 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:15.219 13:07:55 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:15.219 13:07:55 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:15.219 13:07:55 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:15.219 13:07:55 accel -- accel/accel.sh@41 -- # jq -r . 00:08:15.219 [2024-07-26 13:07:55.677750] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:08:15.219 [2024-07-26 13:07:55.677810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid617556 ] 00:08:15.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.477 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:15.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.477 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:15.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.477 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:15.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.477 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:15.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.477 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:15.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.477 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:15.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.477 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:15.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.477 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:15.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.477 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:15.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.477 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:15.477 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.477 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:15.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.478 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:15.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.478 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:15.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.478 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:15.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.478 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:15.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.478 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:15.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.478 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:15.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.478 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:15.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.478 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:15.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.478 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:15.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.478 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:15.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.478 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:15.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.478 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:15.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.478 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:15.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.478 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:15.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.478 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:15.478 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.478 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:15.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.478 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:15.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.478 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:15.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.478 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:15.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.478 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:15.478 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:15.478 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:15.478 [2024-07-26 13:07:55.812298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.478 [2024-07-26 13:07:55.895116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.044 13:07:56 accel -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:16.044 13:07:56 accel -- common/autotest_common.sh@864 -- # return 0 00:08:16.044 13:07:56 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:08:16.044 13:07:56 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:08:16.044 13:07:56 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:08:16.044 13:07:56 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:08:16.044 13:07:56 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:08:16.044 13:07:56 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:08:16.044 13:07:56 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:08:16.044 13:07:56 accel -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.044 13:07:56 accel -- common/autotest_common.sh@10 -- # set +x 00:08:16.303 13:07:56 accel -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.303 13:07:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:16.303 13:07:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:16.303 13:07:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:16.303 13:07:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:16.303 13:07:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:16.303 13:07:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:16.303 13:07:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:16.303 13:07:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:16.303 13:07:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:16.303 13:07:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:16.303 13:07:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # 
read -r opc module 00:08:16.303 13:07:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:16.303 13:07:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:16.303 13:07:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:16.303 13:07:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:16.303 13:07:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:16.303 13:07:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:16.303 13:07:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:16.303 13:07:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:16.303 13:07:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:16.303 13:07:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:16.303 13:07:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:16.303 13:07:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:16.303 13:07:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:16.303 13:07:56 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:16.303 13:07:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:16.303 13:07:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:16.303 13:07:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:16.303 13:07:56 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # IFS== 00:08:16.303 13:07:56 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:16.303 13:07:56 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:08:16.303 13:07:56 accel -- accel/accel.sh@75 -- # killprocess 617556 00:08:16.303 13:07:56 accel -- common/autotest_common.sh@950 -- # '[' -z 617556 ']' 00:08:16.303 13:07:56 accel -- common/autotest_common.sh@954 -- # kill -0 617556 00:08:16.303 13:07:56 accel -- common/autotest_common.sh@955 -- # uname 00:08:16.303 13:07:56 accel -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:16.303 13:07:56 accel -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 617556 00:08:16.303 13:07:56 accel -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:16.303 13:07:56 accel -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:16.303 13:07:56 accel -- common/autotest_common.sh@968 -- # echo 'killing process with pid 617556' 00:08:16.303 killing process with pid 617556 00:08:16.303 13:07:56 accel -- common/autotest_common.sh@969 -- # kill 617556 00:08:16.303 13:07:56 accel -- common/autotest_common.sh@974 -- # wait 617556 00:08:16.561 13:07:57 accel -- accel/accel.sh@76 -- # trap - ERR 00:08:16.561 13:07:57 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:08:16.562 13:07:57 
accel -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:16.562 13:07:57 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.562 13:07:57 accel -- common/autotest_common.sh@10 -- # set +x 00:08:16.562 13:07:57 accel.accel_help -- common/autotest_common.sh@1125 -- # accel_perf -h 00:08:16.562 13:07:57 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:08:16.562 13:07:57 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:08:16.562 13:07:57 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:16.562 13:07:57 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:16.562 13:07:57 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.562 13:07:57 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.562 13:07:57 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:16.562 13:07:57 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:08:16.562 13:07:57 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:08:16.562 13:07:57 accel.accel_help -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.562 13:07:57 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:08:16.820 13:07:57 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:08:16.820 13:07:57 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:16.820 13:07:57 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.820 13:07:57 accel -- common/autotest_common.sh@10 -- # set +x 00:08:16.820 ************************************ 00:08:16.820 START TEST accel_missing_filename 00:08:16.820 ************************************ 00:08:16.820 13:07:57 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w compress 00:08:16.820 13:07:57 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # local es=0 00:08:16.820 13:07:57 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:08:16.820 13:07:57 accel.accel_missing_filename -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:08:16.820 13:07:57 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.820 13:07:57 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # type -t accel_perf 00:08:16.820 13:07:57 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.820 13:07:57 accel.accel_missing_filename -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:08:16.820 13:07:57 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:08:16.820 13:07:57 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:08:16.820 13:07:57 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:16.820 13:07:57 
accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:16.820 13:07:57 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.820 13:07:57 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.820 13:07:57 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:16.820 13:07:57 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:08:16.820 13:07:57 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:08:16.820 [2024-07-26 13:07:57.196248] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:08:16.820 [2024-07-26 13:07:57.196320] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid617864 ] 00:08:16.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.820 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:16.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.820 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:16.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.820 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:16.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.820 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:16.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.820 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:16.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.820 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:16.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.820 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:16.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:08:16.820 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:16.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.820 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:16.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.820 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:16.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.820 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:16.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.820 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:16.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.820 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:16.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.820 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:16.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.820 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:16.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.820 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:16.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.820 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:16.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.820 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:16.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.820 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:16.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.820 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:16.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.820 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:16.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.820 EAL: 
Requested device 0000:3f:01.5 cannot be used 00:08:16.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.820 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:16.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.820 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:16.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.820 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:16.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.820 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:16.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.820 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:16.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.820 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:16.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.821 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:16.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.821 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:16.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.821 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:16.821 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:16.821 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:16.821 [2024-07-26 13:07:57.314949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.079 [2024-07-26 13:07:57.396984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.079 [2024-07-26 13:07:57.459521] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:17.079 [2024-07-26 13:07:57.522159] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:08:17.079 A filename is required. 
00:08:17.079 13:07:57 accel.accel_missing_filename -- common/autotest_common.sh@653 -- # es=234 00:08:17.079 13:07:57 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:17.079 13:07:57 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # es=106 00:08:17.079 13:07:57 accel.accel_missing_filename -- common/autotest_common.sh@663 -- # case "$es" in 00:08:17.079 13:07:57 accel.accel_missing_filename -- common/autotest_common.sh@670 -- # es=1 00:08:17.079 13:07:57 accel.accel_missing_filename -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:17.079 00:08:17.079 real 0m0.444s 00:08:17.079 user 0m0.300s 00:08:17.079 sys 0m0.172s 00:08:17.079 13:07:57 accel.accel_missing_filename -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.079 13:07:57 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:08:17.079 ************************************ 00:08:17.079 END TEST accel_missing_filename 00:08:17.079 ************************************ 00:08:17.337 13:07:57 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:17.337 13:07:57 accel -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:08:17.337 13:07:57 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.337 13:07:57 accel -- common/autotest_common.sh@10 -- # set +x 00:08:17.337 ************************************ 00:08:17.337 START TEST accel_compress_verify 00:08:17.337 ************************************ 00:08:17.337 13:07:57 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:17.337 13:07:57 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # local es=0 00:08:17.337 13:07:57 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # valid_exec_arg 
accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:17.337 13:07:57 accel.accel_compress_verify -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:08:17.337 13:07:57 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.337 13:07:57 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # type -t accel_perf 00:08:17.337 13:07:57 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.337 13:07:57 accel.accel_compress_verify -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:17.337 13:07:57 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:17.337 13:07:57 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:17.337 13:07:57 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:17.337 13:07:57 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:17.337 13:07:57 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:17.337 13:07:57 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:17.337 13:07:57 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:17.337 13:07:57 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:17.337 13:07:57 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:08:17.337 [2024-07-26 13:07:57.703838] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:08:17.337 [2024-07-26 13:07:57.703892] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid617896 ] 00:08:17.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.337 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:17.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.337 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:17.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.337 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:17.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.337 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:17.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.337 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:17.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.337 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:17.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.337 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:17.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.337 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:17.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.337 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:17.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.338 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:17.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.338 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:17.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.338 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:17.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.338 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:17.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.338 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:17.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.338 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:17.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.338 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:17.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.338 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:17.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.338 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:17.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.338 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:17.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.338 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:17.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.338 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:17.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.338 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:17.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.338 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:17.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.338 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:17.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.338 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:17.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.338 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:17.338 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.338 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:17.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.338 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:17.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.338 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:17.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.338 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:17.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.338 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:17.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:17.338 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:17.338 [2024-07-26 13:07:57.822554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.596 [2024-07-26 13:07:57.905671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.596 [2024-07-26 13:07:57.972908] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:17.596 [2024-07-26 13:07:58.038680] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:08:17.596 00:08:17.596 Compression does not support the verify option, aborting. 
00:08:17.596 13:07:58 accel.accel_compress_verify -- common/autotest_common.sh@653 -- # es=161 00:08:17.596 13:07:58 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:17.596 13:07:58 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # es=33 00:08:17.596 13:07:58 accel.accel_compress_verify -- common/autotest_common.sh@663 -- # case "$es" in 00:08:17.596 13:07:58 accel.accel_compress_verify -- common/autotest_common.sh@670 -- # es=1 00:08:17.596 13:07:58 accel.accel_compress_verify -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:17.596 00:08:17.596 real 0m0.435s 00:08:17.596 user 0m0.292s 00:08:17.596 sys 0m0.166s 00:08:17.596 13:07:58 accel.accel_compress_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.596 13:07:58 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:08:17.596 ************************************ 00:08:17.596 END TEST accel_compress_verify 00:08:17.596 ************************************ 00:08:17.855 13:07:58 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:08:17.855 13:07:58 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:17.855 13:07:58 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.855 13:07:58 accel -- common/autotest_common.sh@10 -- # set +x 00:08:17.855 ************************************ 00:08:17.855 START TEST accel_wrong_workload 00:08:17.855 ************************************ 00:08:17.855 13:07:58 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w foobar 00:08:17.855 13:07:58 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # local es=0 00:08:17.855 13:07:58 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:08:17.855 13:07:58 accel.accel_wrong_workload -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:08:17.855 13:07:58 
accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.855 13:07:58 accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # type -t accel_perf 00:08:17.855 13:07:58 accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.855 13:07:58 accel.accel_wrong_workload -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:08:17.855 13:07:58 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:08:17.855 13:07:58 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:08:17.855 13:07:58 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:17.855 13:07:58 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:17.855 13:07:58 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:17.855 13:07:58 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:17.855 13:07:58 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:17.855 13:07:58 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:08:17.855 13:07:58 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:08:17.855 Unsupported workload type: foobar 00:08:17.855 [2024-07-26 13:07:58.237551] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:08:17.855 accel_perf options: 00:08:17.855 [-h help message] 00:08:17.855 [-q queue depth per core] 00:08:17.855 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:17.855 [-T number of threads per core 00:08:17.855 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:08:17.855 [-t time in seconds] 00:08:17.856 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:17.856 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:17.856 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:17.856 [-l for compress/decompress workloads, name of uncompressed input file 00:08:17.856 [-S for crc32c workload, use this seed value (default 0) 00:08:17.856 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:17.856 [-f for fill workload, use this BYTE value (default 255) 00:08:17.856 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:17.856 [-y verify result if this switch is on] 00:08:17.856 [-a tasks to allocate per core (default: same value as -q)] 00:08:17.856 Can be used to spread operations across a wider range of memory. 
00:08:17.856 13:07:58 accel.accel_wrong_workload -- common/autotest_common.sh@653 -- # es=1 00:08:17.856 13:07:58 accel.accel_wrong_workload -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:17.856 13:07:58 accel.accel_wrong_workload -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:17.856 13:07:58 accel.accel_wrong_workload -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:17.856 00:08:17.856 real 0m0.043s 00:08:17.856 user 0m0.023s 00:08:17.856 sys 0m0.020s 00:08:17.856 13:07:58 accel.accel_wrong_workload -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.856 13:07:58 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:08:17.856 ************************************ 00:08:17.856 END TEST accel_wrong_workload 00:08:17.856 ************************************ 00:08:17.856 Error: writing output failed: Broken pipe 00:08:17.856 13:07:58 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:08:17.856 13:07:58 accel -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:08:17.856 13:07:58 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.856 13:07:58 accel -- common/autotest_common.sh@10 -- # set +x 00:08:17.856 ************************************ 00:08:17.856 START TEST accel_negative_buffers 00:08:17.856 ************************************ 00:08:17.856 13:07:58 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:08:17.856 13:07:58 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # local es=0 00:08:17.856 13:07:58 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:08:17.856 13:07:58 accel.accel_negative_buffers -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:08:17.856 13:07:58 accel.accel_negative_buffers -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.856 13:07:58 
accel.accel_negative_buffers -- common/autotest_common.sh@642 -- # type -t accel_perf 00:08:17.856 13:07:58 accel.accel_negative_buffers -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.856 13:07:58 accel.accel_negative_buffers -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:08:17.856 13:07:58 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:08:17.856 13:07:58 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:08:17.856 13:07:58 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:17.856 13:07:58 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:17.856 13:07:58 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:17.856 13:07:58 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:17.856 13:07:58 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:17.856 13:07:58 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:08:17.856 13:07:58 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:08:17.856 -x option must be non-negative. 00:08:17.856 [2024-07-26 13:07:58.358828] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:08:17.856 accel_perf options: 00:08:17.856 [-h help message] 00:08:17.856 [-q queue depth per core] 00:08:17.856 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:17.856 [-T number of threads per core 00:08:17.856 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:08:17.856 [-t time in seconds] 00:08:17.856 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:17.856 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:17.856 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:17.856 [-l for compress/decompress workloads, name of uncompressed input file 00:08:17.856 [-S for crc32c workload, use this seed value (default 0) 00:08:17.856 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:17.856 [-f for fill workload, use this BYTE value (default 255) 00:08:17.856 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:17.856 [-y verify result if this switch is on] 00:08:17.856 [-a tasks to allocate per core (default: same value as -q)] 00:08:17.856 Can be used to spread operations across a wider range of memory. 
00:08:17.856 13:07:58 accel.accel_negative_buffers -- common/autotest_common.sh@653 -- # es=1 00:08:17.856 13:07:58 accel.accel_negative_buffers -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:17.856 13:07:58 accel.accel_negative_buffers -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:17.856 13:07:58 accel.accel_negative_buffers -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:17.856 00:08:17.856 real 0m0.043s 00:08:17.856 user 0m0.021s 00:08:17.856 sys 0m0.022s 00:08:17.856 13:07:58 accel.accel_negative_buffers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.856 13:07:58 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:08:17.856 ************************************ 00:08:17.856 END TEST accel_negative_buffers 00:08:17.856 ************************************ 00:08:17.856 Error: writing output failed: Broken pipe 00:08:18.115 13:07:58 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:08:18.115 13:07:58 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:08:18.115 13:07:58 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.115 13:07:58 accel -- common/autotest_common.sh@10 -- # set +x 00:08:18.115 ************************************ 00:08:18.115 START TEST accel_crc32c 00:08:18.115 ************************************ 00:08:18.115 13:07:58 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w crc32c -S 32 -y 00:08:18.115 13:07:58 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:18.115 13:07:58 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:18.115 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.115 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.115 13:07:58 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:18.115 13:07:58 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:08:18.115 13:07:58 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:18.115 13:07:58 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:18.115 13:07:58 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:18.115 13:07:58 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.115 13:07:58 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.115 13:07:58 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:18.115 13:07:58 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:18.115 13:07:58 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:18.115 [2024-07-26 13:07:58.481133] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:08:18.115 [2024-07-26 13:07:58.481200] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid618199 ] 00:08:18.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.115 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:18.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.115 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:18.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.115 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:18.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.115 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:18.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.115 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:18.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.115 EAL: Requested device 
0000:3d:01.5 cannot be used 00:08:18.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.115 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:18.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.115 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:18.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.115 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:18.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.115 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:18.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.115 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:18.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.115 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:18.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.115 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:18.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.115 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:18.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.115 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:18.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.115 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:18.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.115 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:18.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.115 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:18.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.115 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:18.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.116 EAL: Requested device 0000:3f:01.3 cannot be 
used 00:08:18.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.116 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:18.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.116 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:18.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.116 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:18.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.116 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:18.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.116 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:18.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.116 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:18.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.116 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:18.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.116 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:18.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.116 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:18.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.116 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:18.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.116 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:18.116 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:18.116 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:18.116 [2024-07-26 13:07:58.614659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.390 [2024-07-26 13:07:58.695989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:18.390 
13:07:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:18.390 13:07:58 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@21 -- # 
case "$var" in 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:18.390 13:07:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.765 13:07:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:19.765 13:07:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.765 13:07:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.765 13:07:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.765 13:07:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:19.765 13:07:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.765 13:07:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.765 13:07:59 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.765 13:07:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:19.765 13:07:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.765 13:07:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.765 13:07:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.765 13:07:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:19.765 13:07:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.765 13:07:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.765 13:07:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.765 13:07:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:19.765 13:07:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.765 13:07:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.765 13:07:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.765 13:07:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:08:19.765 13:07:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:19.765 13:07:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:19.765 13:07:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:19.765 13:07:59 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:19.765 13:07:59 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:19.765 13:07:59 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:19.765 00:08:19.765 real 0m1.460s 00:08:19.765 user 0m1.284s 00:08:19.765 sys 0m0.182s 00:08:19.765 13:07:59 accel.accel_crc32c -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.765 13:07:59 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:08:19.765 ************************************ 00:08:19.765 END TEST accel_crc32c 00:08:19.765 ************************************ 00:08:19.765 
13:07:59 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2
00:08:19.765 13:07:59 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']'
00:08:19.765 13:07:59 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:19.765 13:07:59 accel -- common/autotest_common.sh@10 -- # set +x
00:08:19.765 ************************************
00:08:19.765 START TEST accel_crc32c_C2
************************************
00:08:19.765 13:07:59 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w crc32c -y -C 2
00:08:19.765 13:07:59 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:08:19.765 13:07:59 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:08:19.765 13:07:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:19.765 13:07:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:19.765 13:07:59 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2
00:08:19.765 13:07:59 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:08:19.765 13:07:59 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:08:19.765 13:07:59 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:19.765 13:07:59 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:19.765 13:07:59 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:19.765 13:07:59 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:19.765 13:07:59 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:19.765 13:07:59 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=,
00:08:19.765 13:07:59 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:08:19.765 [2024-07-26 13:08:00.024122] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization...
00:08:19.765 [2024-07-26 13:08:00.024189] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid618489 ]
00:08:19.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.765 EAL: Requested device 0000:3d:01.0 cannot be used
00:08:19.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.765 EAL: Requested device 0000:3d:01.1 cannot be used
00:08:19.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.765 EAL: Requested device 0000:3d:01.2 cannot be used
00:08:19.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.765 EAL: Requested device 0000:3d:01.3 cannot be used
00:08:19.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.765 EAL: Requested device 0000:3d:01.4 cannot be used
00:08:19.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.765 EAL: Requested device 0000:3d:01.5 cannot be used
00:08:19.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.765 EAL: Requested device 0000:3d:01.6 cannot be used
00:08:19.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.765 EAL: Requested device 0000:3d:01.7 cannot be used
00:08:19.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.765 EAL: Requested device 0000:3d:02.0 cannot be used
00:08:19.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.765 EAL: Requested device 0000:3d:02.1 cannot be used
00:08:19.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.765 EAL: Requested device 0000:3d:02.2 cannot be used
00:08:19.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.765 EAL: Requested device 0000:3d:02.3 cannot be used
00:08:19.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.766 EAL: Requested device 0000:3d:02.4 cannot be used
00:08:19.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.766 EAL: Requested device 0000:3d:02.5 cannot be used
00:08:19.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.766 EAL: Requested device 0000:3d:02.6 cannot be used
00:08:19.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.766 EAL: Requested device 0000:3d:02.7 cannot be used
00:08:19.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.766 EAL: Requested device 0000:3f:01.0 cannot be used
00:08:19.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.766 EAL: Requested device 0000:3f:01.1 cannot be used
00:08:19.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.766 EAL: Requested device 0000:3f:01.2 cannot be used
00:08:19.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.766 EAL: Requested device 0000:3f:01.3 cannot be used
00:08:19.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.766 EAL: Requested device 0000:3f:01.4 cannot be used
00:08:19.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.766 EAL: Requested device 0000:3f:01.5 cannot be used
00:08:19.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.766 EAL: Requested device 0000:3f:01.6 cannot be used
00:08:19.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.766 EAL: Requested device 0000:3f:01.7 cannot be used
00:08:19.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.766 EAL: Requested device 0000:3f:02.0 cannot be used
00:08:19.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.766 EAL: Requested device 0000:3f:02.1 cannot be used
00:08:19.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.766 EAL: Requested device 0000:3f:02.2 cannot be used
00:08:19.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.766 EAL: Requested device 0000:3f:02.3 cannot be used
00:08:19.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.766 EAL: Requested device 0000:3f:02.4 cannot be used
00:08:19.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.766 EAL: Requested device 0000:3f:02.5 cannot be used
00:08:19.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.766 EAL: Requested device 0000:3f:02.6 cannot be used
00:08:19.766 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:19.766 EAL: Requested device 0000:3f:02.7 cannot be used
00:08:19.766 [2024-07-26 13:08:00.157450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:19.766 [2024-07-26 13:08:00.240160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:20.024 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:20.025 13:08:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:20.959
00:08:20.959 real 0m1.460s
00:08:20.959 user 0m1.279s
00:08:20.959 sys 0m0.186s
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:20.959 13:08:01 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:08:20.959 ************************************
00:08:20.959 END TEST accel_crc32c_C2
************************************
00:08:21.218 13:08:01 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:08:21.218 13:08:01 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:08:21.218 13:08:01 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:21.218 13:08:01 accel -- common/autotest_common.sh@10 -- # set +x
00:08:21.218 ************************************
00:08:21.218 START TEST accel_copy
************************************
00:08:21.218 13:08:01 accel.accel_copy -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w copy -y
00:08:21.218 13:08:01 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc
00:08:21.218 13:08:01 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module
00:08:21.218 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:08:21.218 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:08:21.218 13:08:01 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:08:21.218 13:08:01 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:08:21.218 13:08:01 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
00:08:21.218 13:08:01 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:21.218 13:08:01 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:21.218 13:08:01 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:21.218 13:08:01 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:21.218 13:08:01 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:21.218 13:08:01 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=,
00:08:21.218 13:08:01 accel.accel_copy -- accel/accel.sh@41 -- # jq -r .
00:08:21.218 [2024-07-26 13:08:01.569385] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization...
00:08:21.218 [2024-07-26 13:08:01.569441] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid618766 ]
00:08:21.218 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.218 EAL: Requested device 0000:3d:01.0 cannot be used
00:08:21.218 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.218 EAL: Requested device 0000:3d:01.1 cannot be used
00:08:21.218 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.218 EAL: Requested device 0000:3d:01.2 cannot be used
00:08:21.218 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.218 EAL: Requested device 0000:3d:01.3 cannot be used
00:08:21.218 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.218 EAL: Requested device 0000:3d:01.4 cannot be used
00:08:21.218 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.218 EAL: Requested device 0000:3d:01.5 cannot be used
00:08:21.218 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.218 EAL: Requested device 0000:3d:01.6 cannot be used
00:08:21.218 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.218 EAL: Requested device 0000:3d:01.7 cannot be used
00:08:21.218 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.218 EAL: Requested device 0000:3d:02.0 cannot be used
00:08:21.218 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.218 EAL: Requested device 0000:3d:02.1 cannot be used
00:08:21.218 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.218 EAL: Requested device 0000:3d:02.2 cannot be used
00:08:21.218 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.218 EAL: Requested device 0000:3d:02.3 cannot be used
00:08:21.218 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.218 EAL: Requested device 0000:3d:02.4 cannot be used
00:08:21.218 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.218 EAL: Requested device 0000:3d:02.5 cannot be used
00:08:21.218 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.218 EAL: Requested device 0000:3d:02.6 cannot be used
00:08:21.218 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.218 EAL: Requested device 0000:3d:02.7 cannot be used
00:08:21.218 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.218 EAL: Requested device 0000:3f:01.0 cannot be used
00:08:21.219 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.218 EAL: Requested device 0000:3f:01.1 cannot be used
00:08:21.219 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.219 EAL: Requested device 0000:3f:01.2 cannot be used
00:08:21.219 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.219 EAL: Requested device 0000:3f:01.3 cannot be used
00:08:21.219 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.219 EAL: Requested device 0000:3f:01.4 cannot be used
00:08:21.219 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.219 EAL: Requested device 0000:3f:01.5 cannot be used
00:08:21.219 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.219 EAL: Requested device 0000:3f:01.6 cannot be used
00:08:21.219 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.219 EAL: Requested device 0000:3f:01.7 cannot be used
00:08:21.219 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.219 EAL: Requested device 0000:3f:02.0 cannot be used
00:08:21.219 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.219 EAL: Requested device 0000:3f:02.1 cannot be used
00:08:21.219 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.219 EAL: Requested device 0000:3f:02.2 cannot be used
00:08:21.219 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.219 EAL: Requested device 0000:3f:02.3 cannot be used
00:08:21.219 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.219 EAL: Requested device 0000:3f:02.4 cannot be used
00:08:21.219 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.219 EAL: Requested device 0000:3f:02.5 cannot be used
00:08:21.219 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.219 EAL: Requested device 0000:3f:02.6 cannot be used
00:08:21.219 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:21.219 EAL: Requested device 0000:3f:02.7 cannot be used
00:08:21.219 [2024-07-26 13:08:01.698937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:21.477 [2024-07-26 13:08:01.781309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:21.477 13:08:01 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:08:21.477 13:08:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:08:21.477 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:08:21.477 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:08:21.477 13:08:01 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:08:21.477 13:08:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:08:21.477 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:08:21.477 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:08:21.477 13:08:01 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1
00:08:21.477 13:08:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:08:21.477 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:08:21.477 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:08:21.477 13:08:01 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:08:21.477 13:08:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:08:21.477 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:08:21.477 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:08:21.477 13:08:01 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:08:21.477 13:08:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:08:21.477 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:08:21.477 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:08:21.477 13:08:01 accel.accel_copy -- accel/accel.sh@20 -- # val=copy
00:08:21.477 13:08:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:08:21.477 13:08:01 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy
00:08:21.477 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:08:21.477 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:08:21.477 13:08:01 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:21.477 13:08:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:08:21.477 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@20 -- # val=software
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@20 -- # val=1
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:08:21.478 13:08:01 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:08:22.861 13:08:02 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:08:22.861 13:08:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:08:22.861 13:08:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:08:22.861 13:08:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:08:22.861 13:08:02 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:08:22.861 13:08:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:08:22.861 13:08:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:08:22.861 13:08:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:08:22.861 13:08:02 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:08:22.861 13:08:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:08:22.861 13:08:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:08:22.861 13:08:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:08:22.861 13:08:02 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:08:22.861 13:08:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:08:22.861 13:08:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:08:22.861 13:08:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:08:22.861 13:08:02 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:08:22.861 13:08:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:08:22.861 13:08:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:08:22.861 13:08:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:08:22.861 13:08:02 accel.accel_copy -- accel/accel.sh@20 -- # val=
00:08:22.861 13:08:02 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in
00:08:22.861 13:08:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:08:22.861 13:08:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:08:22.861 13:08:02 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:22.861 13:08:02 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:08:22.861 13:08:02 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:22.862
00:08:22.862 real 0m1.462s
00:08:22.862 user 0m1.276s
00:08:22.862 sys 0m0.185s
00:08:22.862 13:08:02 accel.accel_copy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:22.862 13:08:02 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x
00:08:22.862 ************************************
00:08:22.862 END TEST accel_copy
************************************
00:08:22.862 13:08:03 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:08:22.862 13:08:03 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:08:22.862 13:08:03 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:22.862 13:08:03 accel -- common/autotest_common.sh@10 -- # set +x
00:08:22.862 ************************************
00:08:22.862 START TEST accel_fill
************************************
00:08:22.862 13:08:03 accel.accel_fill -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:08:22.862 13:08:03 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc
00:08:22.862 13:08:03 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module
00:08:22.862 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:08:22.862 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:08:22.862 13:08:03 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:08:22.862 13:08:03 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:08:22.862 13:08:03 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
00:08:22.862 13:08:03 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:22.862 13:08:03 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:22.862 13:08:03 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:22.862 13:08:03 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:22.862 13:08:03 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:22.862 13:08:03 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:08:22.862 13:08:03 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:08:22.862 [2024-07-26 13:08:03.109843] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:08:22.863 [2024-07-26 13:08:03.109898] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid619054 ] 00:08:22.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.863 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:22.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.863 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:22.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.863 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:22.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.863 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:22.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.863 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:22.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.863 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:22.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.863 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:22.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.863 EAL: Requested device 0000:3d:01.7 
cannot be used 00:08:22.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.863 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:22.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.863 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:22.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.863 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:22.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.863 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:22.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.863 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:22.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.863 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:22.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.863 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:22.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.863 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:22.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.863 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:22.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.863 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:22.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.863 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:22.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.863 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:22.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.863 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:22.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.863 EAL: Requested device 0000:3f:01.5 cannot be used 
00:08:22.863 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.864 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:22.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.864 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:22.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.864 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:22.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.864 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:22.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.864 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:22.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.864 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:22.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.864 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:22.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.864 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:22.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.864 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:22.864 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:22.864 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:22.864 [2024-07-26 13:08:03.240048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.864 [2024-07-26 13:08:03.322151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@20 -- # val= 
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@20 -- # val=fill
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@20 -- # val=software
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@20 -- # val=1
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds'
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:08:23.127 13:08:03 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:08:24.061 13:08:04 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:08:24.061 13:08:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:08:24.061 13:08:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:08:24.061 13:08:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:08:24.061 13:08:04 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:08:24.061 13:08:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:08:24.061 13:08:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:08:24.061 13:08:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:08:24.061 13:08:04 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:08:24.061 13:08:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:08:24.061 13:08:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:08:24.061 13:08:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:08:24.061 13:08:04 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:08:24.061 13:08:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:08:24.061 13:08:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:08:24.061 13:08:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:08:24.061 13:08:04 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:08:24.061 13:08:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:08:24.061 13:08:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:08:24.061 13:08:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:08:24.061 13:08:04 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:08:24.061 13:08:04 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:08:24.061 13:08:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:08:24.061 13:08:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:08:24.061 13:08:04 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:24.061 13:08:04 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:08:24.061 13:08:04 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:24.061
00:08:24.061 real 0m1.457s
00:08:24.061 user 0m1.282s
00:08:24.061 sys 0m0.177s
00:08:24.061 13:08:04 accel.accel_fill -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:24.061 13:08:04 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x
00:08:24.061 ************************************
00:08:24.061 END TEST accel_fill
00:08:24.061 ************************************
00:08:24.061 13:08:04 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:08:24.061 13:08:04 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:08:24.061 13:08:04 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:24.061 13:08:04 accel -- common/autotest_common.sh@10 -- # set +x
00:08:24.319 ************************************
00:08:24.319 START TEST accel_copy_crc32c
00:08:24.319 ************************************
00:08:24.319 13:08:04 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w copy_crc32c -y
00:08:24.319 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc
00:08:24.319 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module
00:08:24.319 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:24.319 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:24.319 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:08:24.319 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:08:24.319 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:08:24.319 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:24.319 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:24.319 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:24.319 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:24.319 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:24.319 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=,
00:08:24.319 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r .
00:08:24.319 [2024-07-26 13:08:04.652414] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization...
00:08:24.319 [2024-07-26 13:08:04.652469] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid619331 ]
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3d:01.0 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3d:01.1 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3d:01.2 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3d:01.3 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3d:01.4 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3d:01.5 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3d:01.6 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3d:01.7 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3d:02.0 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3d:02.1 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3d:02.2 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3d:02.3 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3d:02.4 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3d:02.5 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3d:02.6 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3d:02.7 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3f:01.0 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3f:01.1 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3f:01.2 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3f:01.3 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3f:01.4 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3f:01.5 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3f:01.6 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3f:01.7 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3f:02.0 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3f:02.1 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3f:02.2 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3f:02.3 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3f:02.4 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3f:02.5 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3f:02.6 cannot be used
00:08:24.319 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.319 EAL: Requested device 0000:3f:02.7 cannot be used
00:08:24.320 [2024-07-26 13:08:04.782334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:24.578 [2024-07-26 13:08:04.865787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:24.578 13:08:04 accel.accel_copy_crc32c --
accel/accel.sh@19 -- # read -r var val
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:24.578 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:08:24.579 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:24.579 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:24.579 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:24.579 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes
00:08:24.579 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:24.579 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:24.579 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:24.579 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:08:24.579 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:24.579 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:24.579 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:24.579 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:08:24.579 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:24.579 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:24.579 13:08:04 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:25.952
00:08:25.952 real 0m1.455s
00:08:25.952 user 0m1.271s
00:08:25.952 sys 0m0.192s
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:25.952 13:08:06 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x
00:08:25.952 ************************************
00:08:25.952 END TEST accel_copy_crc32c
00:08:25.952 ************************************
00:08:25.952 13:08:06 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:08:25.953 13:08:06 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']'
00:08:25.953 13:08:06 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:25.953 13:08:06 accel -- common/autotest_common.sh@10 -- # set +x
00:08:25.953 ************************************
00:08:25.953 START TEST accel_copy_crc32c_C2
00:08:25.953 ************************************
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=,
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:08:25.953 [2024-07-26 13:08:06.192365] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization...
00:08:25.953 [2024-07-26 13:08:06.192424] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid619620 ]
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3d:01.0 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3d:01.1 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3d:01.2 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3d:01.3 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3d:01.4 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3d:01.5 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3d:01.6 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3d:01.7 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3d:02.0 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3d:02.1 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3d:02.2 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3d:02.3 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3d:02.4 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3d:02.5 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3d:02.6 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3d:02.7 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3f:01.0 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3f:01.1 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3f:01.2 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3f:01.3 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3f:01.4 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3f:01.5 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3f:01.6 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3f:01.7 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3f:02.0 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3f:02.1 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3f:02.2 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3f:02.3 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3f:02.4 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3f:02.5 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3f:02.6 cannot be used
00:08:25.953 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:25.953 EAL: Requested device 0000:3f:02.7 cannot be used
00:08:25.953 [2024-07-26 13:08:06.327951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:25.953 [2024-07-26 13:08:06.410488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2
-- accel/accel.sh@21 -- # case "$var" in
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:25.953 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes'
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:26.211 13:08:06 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:27.145 13:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:08:27.145 13:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:08:27.145 13:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:08:27.145 13:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:08:27.145 13:08:07
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:27.145 13:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:27.145 13:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:27.145 13:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:27.145 13:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:27.145 13:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:27.145 13:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:27.145 13:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:27.145 13:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:27.145 13:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:27.145 13:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:27.145 13:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:27.145 13:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:27.145 13:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:27.145 13:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:27.145 13:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:27.145 13:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:27.145 13:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:27.145 13:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:27.145 13:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:27.145 13:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:27.145 13:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:27.145 13:08:07 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:27.145 
00:08:27.145 real 0m1.464s 00:08:27.145 user 0m1.271s 00:08:27.145 sys 0m0.198s 00:08:27.145 13:08:07 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.145 13:08:07 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:27.145 ************************************ 00:08:27.145 END TEST accel_copy_crc32c_C2 00:08:27.145 ************************************ 00:08:27.145 13:08:07 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:27.145 13:08:07 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:27.145 13:08:07 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.145 13:08:07 accel -- common/autotest_common.sh@10 -- # set +x 00:08:27.437 ************************************ 00:08:27.437 START TEST accel_dualcast 00:08:27.437 ************************************ 00:08:27.437 13:08:07 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dualcast -y 00:08:27.437 13:08:07 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:08:27.437 13:08:07 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:08:27.437 13:08:07 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:27.437 13:08:07 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:27.437 13:08:07 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:27.437 13:08:07 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:27.437 13:08:07 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:08:27.437 13:08:07 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:27.437 13:08:07 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:27.437 13:08:07 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:27.437 13:08:07 
accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:27.437 13:08:07 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:27.437 13:08:07 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:08:27.437 13:08:07 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:08:27.437 [2024-07-26 13:08:07.737777] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:08:27.437 [2024-07-26 13:08:07.737831] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid619897 ] 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3d:02.0 cannot be used 
00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:27.437 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:27.437 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:27.437 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:27.437 [2024-07-26 13:08:07.866747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.437 [2024-07-26 13:08:07.950303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:27.696 13:08:08 accel.accel_dualcast -- 
accel/accel.sh@19 -- # IFS=: 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:27.696 13:08:08 
accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case 
"$var" in 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:27.696 13:08:08 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:29.071 13:08:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:29.071 13:08:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:29.071 13:08:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:29.071 13:08:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:29.071 13:08:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:29.071 13:08:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:29.071 13:08:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:29.071 13:08:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:29.071 13:08:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:29.071 13:08:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:29.071 13:08:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:29.071 13:08:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:29.071 13:08:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:29.071 13:08:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:29.071 13:08:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 
00:08:29.071 13:08:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:29.071 13:08:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:29.071 13:08:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:29.071 13:08:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:29.071 13:08:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:29.071 13:08:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:29.071 13:08:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:29.071 13:08:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:29.071 13:08:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:29.071 13:08:09 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:29.071 13:08:09 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:08:29.071 13:08:09 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:29.071 00:08:29.071 real 0m1.458s 00:08:29.071 user 0m1.280s 00:08:29.071 sys 0m0.179s 00:08:29.071 13:08:09 accel.accel_dualcast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.071 13:08:09 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:08:29.071 ************************************ 00:08:29.071 END TEST accel_dualcast 00:08:29.071 ************************************ 00:08:29.071 13:08:09 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:29.071 13:08:09 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:29.072 13:08:09 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.072 13:08:09 accel -- common/autotest_common.sh@10 -- # set +x 00:08:29.072 ************************************ 00:08:29.072 START TEST accel_compare 00:08:29.072 ************************************ 00:08:29.072 13:08:09 accel.accel_compare -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w 
compare -y 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:08:29.072 [2024-07-26 13:08:09.282301] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:08:29.072 [2024-07-26 13:08:09.282359] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid620187 ] 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:29.072 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:29.072 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:29.072 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:29.072 [2024-07-26 13:08:09.413245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.072 [2024-07-26 13:08:09.493303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 
00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # 
IFS=: 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.072 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 
00:08:29.073 13:08:09 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:29.073 13:08:09 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:29.073 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:29.073 13:08:09 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:30.448 13:08:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:30.448 13:08:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:30.448 13:08:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:30.448 13:08:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:30.448 13:08:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:30.448 13:08:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:30.448 13:08:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:30.448 13:08:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:30.448 13:08:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:30.448 13:08:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:30.448 13:08:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:30.448 13:08:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:30.448 13:08:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:30.448 13:08:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:30.448 13:08:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:30.449 13:08:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:30.449 13:08:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:30.449 13:08:10 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:08:30.449 13:08:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:30.449 13:08:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:30.449 13:08:10 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:08:30.449 13:08:10 accel.accel_compare 
-- accel/accel.sh@21 -- # case "$var" in 00:08:30.449 13:08:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:30.449 13:08:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:30.449 13:08:10 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:30.449 13:08:10 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:30.449 13:08:10 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:30.449 00:08:30.449 real 0m1.457s 00:08:30.449 user 0m1.273s 00:08:30.449 sys 0m0.190s 00:08:30.449 13:08:10 accel.accel_compare -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.449 13:08:10 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:08:30.449 ************************************ 00:08:30.449 END TEST accel_compare 00:08:30.449 ************************************ 00:08:30.449 13:08:10 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:30.449 13:08:10 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:30.449 13:08:10 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.449 13:08:10 accel -- common/autotest_common.sh@10 -- # set +x 00:08:30.449 ************************************ 00:08:30.449 START TEST accel_xor 00:08:30.449 ************************************ 00:08:30.449 13:08:10 accel.accel_xor -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w xor -y 00:08:30.449 13:08:10 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:30.449 13:08:10 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:30.449 13:08:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.449 13:08:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.449 13:08:10 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:30.449 13:08:10 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf 
-c /dev/fd/62 -t 1 -w xor -y 00:08:30.449 13:08:10 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:30.449 13:08:10 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:30.449 13:08:10 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:30.449 13:08:10 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:30.449 13:08:10 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:30.449 13:08:10 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:30.449 13:08:10 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:30.449 13:08:10 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:30.449 [2024-07-26 13:08:10.821102] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:08:30.449 [2024-07-26 13:08:10.821166] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid620465 ] 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:08:30.449 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: 
Requested device 0000:3f:01.4 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:30.449 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:30.449 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:30.449 [2024-07-26 13:08:10.955623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.708 [2024-07-26 13:08:11.038703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.708 13:08:11 accel.accel_xor -- 
accel/accel.sh@19 -- # IFS=: 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.708 
13:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:30.708 13:08:11 
accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:30.708 13:08:11 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.084 13:08:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:32.084 13:08:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.084 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.084 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.084 13:08:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:32.084 13:08:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.084 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.084 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.084 13:08:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:32.084 13:08:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.084 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.084 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
00:08:32.084 13:08:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:32.084 13:08:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.084 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.084 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.084 13:08:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:32.084 13:08:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.084 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.084 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.084 13:08:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:32.085 13:08:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.085 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.085 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.085 13:08:12 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:32.085 13:08:12 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:32.085 13:08:12 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:32.085 00:08:32.085 real 0m1.463s 00:08:32.085 user 0m1.282s 00:08:32.085 sys 0m0.186s 00:08:32.085 13:08:12 accel.accel_xor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.085 13:08:12 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:32.085 ************************************ 00:08:32.085 END TEST accel_xor 00:08:32.085 ************************************ 00:08:32.085 13:08:12 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:32.085 13:08:12 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:08:32.085 13:08:12 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.085 13:08:12 accel -- common/autotest_common.sh@10 -- # set +x 00:08:32.085 ************************************ 00:08:32.085 START TEST accel_xor 00:08:32.085 
************************************ 00:08:32.085 13:08:12 accel.accel_xor -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w xor -y -x 3 00:08:32.085 13:08:12 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:08:32.085 13:08:12 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:08:32.085 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.085 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.085 13:08:12 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:32.085 13:08:12 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:32.085 13:08:12 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:08:32.085 13:08:12 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:32.085 13:08:12 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:32.085 13:08:12 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:32.085 13:08:12 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:32.085 13:08:12 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:32.085 13:08:12 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:08:32.085 13:08:12 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:08:32.085 [2024-07-26 13:08:12.367535] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:08:32.085 [2024-07-26 13:08:12.367594] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid620750 ] 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:32.085 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:32.085 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:32.085 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:32.085 [2024-07-26 13:08:12.502566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.085 [2024-07-26 13:08:12.585996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@20 
-- # val= 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.344 
13:08:12 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.344 13:08:12 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:32.344 13:08:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:33.299 13:08:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:33.299 13:08:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:33.299 13:08:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:33.299 13:08:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:33.299 13:08:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:33.299 13:08:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:33.299 13:08:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:33.299 13:08:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:33.299 13:08:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:33.299 13:08:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:33.299 13:08:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:33.299 13:08:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:33.299 13:08:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:33.299 13:08:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:33.299 13:08:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:33.299 13:08:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:33.299 13:08:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:33.299 13:08:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:33.299 13:08:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:33.299 13:08:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:33.299 13:08:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:08:33.299 13:08:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:08:33.299 13:08:13 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:08:33.299 13:08:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:08:33.299 13:08:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:33.299 13:08:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:33.299 13:08:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:33.299 00:08:33.299 real 0m1.456s 00:08:33.299 user 0m1.270s 00:08:33.299 sys 0m0.190s 00:08:33.299 13:08:13 accel.accel_xor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.299 13:08:13 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:08:33.299 ************************************ 00:08:33.299 END TEST accel_xor 00:08:33.299 ************************************ 00:08:33.558 13:08:13 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:33.558 13:08:13 accel -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:33.558 13:08:13 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.558 13:08:13 accel -- common/autotest_common.sh@10 -- # set +x 00:08:33.558 ************************************ 00:08:33.558 START TEST accel_dif_verify 00:08:33.558 ************************************ 00:08:33.558 13:08:13 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dif_verify 00:08:33.558 13:08:13 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:08:33.558 13:08:13 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:08:33.558 13:08:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.558 13:08:13 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.558 13:08:13 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:33.558 13:08:13 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
dif_verify 00:08:33.558 13:08:13 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:08:33.558 13:08:13 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:33.558 13:08:13 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:33.558 13:08:13 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:33.558 13:08:13 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:33.558 13:08:13 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:33.558 13:08:13 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:08:33.558 13:08:13 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:08:33.558 [2024-07-26 13:08:13.910460] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:08:33.558 [2024-07-26 13:08:13.910518] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621030 ] 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:33.558 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:08:33.558 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.558 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:33.558 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.559 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:33.559 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:33.559 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:33.559 [2024-07-26 13:08:14.041198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.818 [2024-07-26 13:08:14.123543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 
00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:33.818 
13:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 
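Aside (not part of the captured log): the dif_verify workload configured above operates on 4096-byte transfers split into 512-byte blocks, each carrying 8 bytes of protection information. As an illustrative sketch only, not SPDK's implementation, the guard-tag part of such a check can be shown in Python, assuming the standard T10-DIF CRC-16 (polynomial 0x8BB7, init 0, no reflection) with the big-endian guard tag in the first two bytes of each 8-byte tuple:

```python
def crc16_t10dif(data: bytes, crc: int = 0) -> int:
    # Bitwise CRC-16/T10-DIF: poly 0x8BB7, init 0x0000, no reflection, no xorout.
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x8BB7) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def verify_guard_tags(payload: bytes, pi: bytes, block_size: int = 512) -> bool:
    # payload: N blocks of block_size bytes; pi: N * 8 bytes of protection info.
    # Only the 2-byte guard tag (big-endian CRC16 of the block) is checked here;
    # a full DIF check would also compare the app tag and reference tag.
    nblocks = len(payload) // block_size
    for i in range(nblocks):
        block = payload[i * block_size:(i + 1) * block_size]
        guard = int.from_bytes(pi[i * 8:i * 8 + 2], "big")
        if crc16_t10dif(block) != guard:
            return False
    return True
```

The `-t 1` in the logged accel_perf invocation bounds the run to one second; the sketch above only mirrors what a single verification of one transfer would compute.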
00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:33.818 13:08:14 
accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:33.818 13:08:14 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:35.194 13:08:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:35.195 13:08:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:35.195 13:08:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:35.195 13:08:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:35.195 13:08:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:35.195 13:08:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:35.195 13:08:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:35.195 13:08:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:35.195 13:08:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:35.195 13:08:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:35.195 13:08:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:35.195 13:08:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:35.195 13:08:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:35.195 13:08:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:35.195 13:08:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:35.195 13:08:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:35.195 13:08:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:08:35.195 13:08:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:35.195 13:08:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:35.195 13:08:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:35.195 13:08:15 accel.accel_dif_verify -- accel/accel.sh@20 
-- # val= 00:08:35.195 13:08:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:08:35.195 13:08:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:08:35.195 13:08:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:08:35.195 13:08:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:35.195 13:08:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:35.195 13:08:15 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:35.195 00:08:35.195 real 0m1.466s 00:08:35.195 user 0m1.281s 00:08:35.195 sys 0m0.187s 00:08:35.195 13:08:15 accel.accel_dif_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:35.195 13:08:15 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:08:35.195 ************************************ 00:08:35.195 END TEST accel_dif_verify 00:08:35.195 ************************************ 00:08:35.195 13:08:15 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:35.195 13:08:15 accel -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:35.195 13:08:15 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:35.195 13:08:15 accel -- common/autotest_common.sh@10 -- # set +x 00:08:35.195 ************************************ 00:08:35.195 START TEST accel_dif_generate 00:08:35.195 ************************************ 00:08:35.195 13:08:15 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dif_generate 00:08:35.195 13:08:15 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:08:35.195 13:08:15 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:08:35.195 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:35.195 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:35.195 13:08:15 accel.accel_dif_generate -- accel/accel.sh@15 -- 
# accel_perf -t 1 -w dif_generate 00:08:35.195 13:08:15 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:35.195 13:08:15 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:08:35.195 13:08:15 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:35.195 13:08:15 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:35.195 13:08:15 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:35.195 13:08:15 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:35.195 13:08:15 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:35.195 13:08:15 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:08:35.195 13:08:15 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:08:35.195 [2024-07-26 13:08:15.458273] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:08:35.195 [2024-07-26 13:08:15.458331] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621315 ] 00:08:35.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:35.195 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:35.195 [identical qat_pci_device_allocate()/EAL messages repeated for devices 0000:3d:01.1 through 0000:3f:02.7, as listed in full above] 00:08:35.195 [2024-07-26 13:08:15.590838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 [2024-07-26 13:08:15.673294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:35.454 13:08:15
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:08:35.454 13:08:15 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:35.454 13:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:35.455 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:35.455 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:35.455 13:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:08:35.455 13:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:35.455 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:35.455 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 
-- # read -r var val 00:08:35.455 13:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:08:35.455 13:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:35.455 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:35.455 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:35.455 13:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:08:35.455 13:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:35.455 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:35.455 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:35.455 13:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:08:35.455 13:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:35.455 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:35.455 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:35.455 13:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:35.455 13:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:35.455 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:35.455 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:35.455 13:08:15 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:35.455 13:08:15 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:35.455 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:35.455 13:08:15 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:36.390 13:08:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:36.390 13:08:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:36.390 13:08:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:36.390 13:08:16 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:08:36.390 13:08:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:36.390 13:08:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:36.390 13:08:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:36.390 13:08:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:36.390 13:08:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:36.390 13:08:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:36.390 13:08:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:36.390 13:08:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:36.390 13:08:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:36.390 13:08:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:36.390 13:08:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:36.390 13:08:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:36.390 13:08:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:36.390 13:08:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:36.390 13:08:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:36.390 13:08:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:36.390 13:08:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:08:36.390 13:08:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:08:36.390 13:08:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:08:36.390 13:08:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:08:36.390 13:08:16 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:36.390 13:08:16 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:36.390 13:08:16 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e 
]] 00:08:36.390 00:08:36.390 real 0m1.458s 00:08:36.390 user 0m1.272s 00:08:36.390 sys 0m0.190s 00:08:36.390 13:08:16 accel.accel_dif_generate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.390 13:08:16 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:08:36.390 ************************************ 00:08:36.390 END TEST accel_dif_generate 00:08:36.390 ************************************ 00:08:36.649 13:08:16 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:36.649 13:08:16 accel -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:36.649 13:08:16 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.649 13:08:16 accel -- common/autotest_common.sh@10 -- # set +x 00:08:36.649 ************************************ 00:08:36.649 START TEST accel_dif_generate_copy 00:08:36.649 ************************************ 00:08:36.649 13:08:16 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dif_generate_copy 00:08:36.649 13:08:16 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:36.649 13:08:16 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:08:36.649 13:08:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:36.649 13:08:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:36.649 13:08:16 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:36.649 13:08:16 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:36.649 13:08:16 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:36.649 13:08:16 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:36.649 13:08:16 accel.accel_dif_generate_copy -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:36.649 13:08:16 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:36.649 13:08:16 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:36.649 13:08:16 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:36.649 13:08:16 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:36.649 13:08:16 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:08:36.649 [2024-07-26 13:08:17.001846] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:08:36.649 [2024-07-26 13:08:17.001902] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621595 ] 00:08:36.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.649 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:36.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.649 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:36.649 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.649 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested 
device 0000:3d:01.7 cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested device 0000:3f:01.5 
cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:36.650 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:36.650 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:36.650 [2024-07-26 13:08:17.132237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.909 [2024-07-26 13:08:17.214302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 
00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:36.909 
13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:36.909 13:08:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:38.286 13:08:18 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:38.286 13:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:38.286 13:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:38.286 13:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:38.286 13:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:38.286 13:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:38.286 13:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:38.286 13:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:38.286 13:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:38.286 13:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:38.286 13:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:38.286 13:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:38.286 13:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:38.286 13:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:38.286 13:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:38.286 13:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:38.286 13:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:38.286 13:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:38.286 13:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:38.286 13:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:38.286 13:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:38.286 13:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:38.286 13:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- 
# IFS=: 00:08:38.286 13:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:38.286 13:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:38.286 13:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:38.286 13:08:18 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:38.286 00:08:38.286 real 0m1.456s 00:08:38.286 user 0m1.273s 00:08:38.286 sys 0m0.190s 00:08:38.286 13:08:18 accel.accel_dif_generate_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.286 13:08:18 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:38.286 ************************************ 00:08:38.286 END TEST accel_dif_generate_copy 00:08:38.286 ************************************ 00:08:38.286 13:08:18 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:38.286 13:08:18 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:38.286 13:08:18 accel -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:38.286 13:08:18 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.286 13:08:18 accel -- common/autotest_common.sh@10 -- # set +x 00:08:38.286 ************************************ 00:08:38.286 START TEST accel_comp 00:08:38.286 ************************************ 00:08:38.286 13:08:18 accel.accel_comp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:38.286 13:08:18 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:38.286 13:08:18 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:38.286 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:38.286 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:38.286 13:08:18 accel.accel_comp -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:38.286 13:08:18 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:38.286 13:08:18 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:38.286 13:08:18 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:38.286 13:08:18 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:38.286 13:08:18 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:38.286 13:08:18 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:38.286 13:08:18 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:38.286 13:08:18 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:38.286 13:08:18 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:38.286 [2024-07-26 13:08:18.542207] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:08:38.286 [2024-07-26 13:08:18.542269] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid621880 ] 00:08:38.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.286 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:38.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.286 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:38.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.286 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:38.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.286 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:38.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.286 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:38.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.286 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:38.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.286 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:38.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.286 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:38.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.286 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:38.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.286 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:38.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.286 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:38.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.286 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:38.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.286 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:38.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.286 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:38.286 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.287 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:38.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.287 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:38.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.287 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:38.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.287 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:38.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.287 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:38.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.287 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:38.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.287 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:38.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.287 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:38.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.287 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:38.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.287 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:38.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.287 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:38.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.287 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:38.287 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.287 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:38.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.287 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:38.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.287 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:38.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.287 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:38.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.287 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:38.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:38.287 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:38.287 [2024-07-26 13:08:18.676280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.287 [2024-07-26 13:08:18.758254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:38.546 13:08:18 accel.accel_comp -- 
accel/accel.sh@20 -- # val=0x1 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:38.546 13:08:18 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:38.546 13:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:38.547 13:08:18 accel.accel_comp -- accel/accel.sh@21 -- 
# case "$var" in 00:08:38.547 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:38.547 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:38.547 13:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:38.547 13:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:38.547 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:38.547 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:38.547 13:08:18 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:38.547 13:08:18 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:38.547 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:38.547 13:08:18 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:39.482 13:08:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:39.482 13:08:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:39.482 13:08:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:39.482 13:08:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:39.482 13:08:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:39.482 13:08:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:39.482 13:08:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:39.482 13:08:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:39.482 13:08:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:39.482 13:08:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:39.482 13:08:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:39.482 13:08:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:39.482 13:08:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:39.482 13:08:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:39.482 13:08:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:39.482 13:08:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:39.482 
13:08:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:39.482 13:08:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:39.482 13:08:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:39.482 13:08:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:39.482 13:08:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:39.482 13:08:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:39.482 13:08:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:39.482 13:08:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:39.482 13:08:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:39.482 13:08:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:39.482 13:08:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:39.482 00:08:39.482 real 0m1.467s 00:08:39.482 user 0m1.284s 00:08:39.482 sys 0m0.186s 00:08:39.482 13:08:19 accel.accel_comp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.482 13:08:19 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:39.482 ************************************ 00:08:39.482 END TEST accel_comp 00:08:39.482 ************************************ 00:08:39.741 13:08:20 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:39.741 13:08:20 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:08:39.741 13:08:20 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.741 13:08:20 accel -- common/autotest_common.sh@10 -- # set +x 00:08:39.741 ************************************ 00:08:39.741 START TEST accel_decomp 00:08:39.741 ************************************ 00:08:39.741 13:08:20 accel.accel_decomp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 
00:08:39.741 13:08:20 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:39.741 13:08:20 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:39.741 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:39.741 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:39.741 13:08:20 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:39.741 13:08:20 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:39.741 13:08:20 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:39.741 13:08:20 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:39.741 13:08:20 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:39.741 13:08:20 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:39.741 13:08:20 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:39.741 13:08:20 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:39.741 13:08:20 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:39.741 13:08:20 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:39.741 [2024-07-26 13:08:20.090878] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:08:39.741 [2024-07-26 13:08:20.090948] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid622163 ] 00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:39.741 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.741 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:39.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.742 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:39.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.742 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:39.742 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:39.742 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:39.742 [2024-07-26 13:08:20.223046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.000 [2024-07-26 13:08:20.305190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:40.000 13:08:20 
accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:40.000 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:40.001 13:08:20 accel.accel_decomp -- 
accel/accel.sh@20 -- # val=software 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 
00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:40.001 13:08:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:41.377 13:08:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:41.378 13:08:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:41.378 13:08:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:41.378 13:08:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:41.378 13:08:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:41.378 13:08:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:41.378 13:08:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:41.378 13:08:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:41.378 13:08:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:41.378 13:08:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:41.378 13:08:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:41.378 13:08:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:41.378 13:08:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:41.378 13:08:21 accel.accel_decomp -- accel/accel.sh@21 -- 
# case "$var" in 00:08:41.378 13:08:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:41.378 13:08:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:41.378 13:08:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:41.378 13:08:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:41.378 13:08:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:41.378 13:08:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:41.378 13:08:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:41.378 13:08:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:41.378 13:08:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:41.378 13:08:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:41.378 13:08:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:41.378 13:08:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:41.378 13:08:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:41.378 00:08:41.378 real 0m1.468s 00:08:41.378 user 0m1.283s 00:08:41.378 sys 0m0.188s 00:08:41.378 13:08:21 accel.accel_decomp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:41.378 13:08:21 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:41.378 ************************************ 00:08:41.378 END TEST accel_decomp 00:08:41.378 ************************************ 00:08:41.378 13:08:21 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:41.378 13:08:21 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:08:41.378 13:08:21 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:41.378 13:08:21 accel -- common/autotest_common.sh@10 -- # set +x 00:08:41.378 ************************************ 00:08:41.378 START TEST accel_decomp_full 
00:08:41.378 ************************************ 00:08:41.378 13:08:21 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:41.378 13:08:21 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:41.378 13:08:21 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:41.378 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:41.378 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:41.378 13:08:21 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:41.378 13:08:21 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:41.378 13:08:21 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:41.378 13:08:21 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:41.378 13:08:21 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:41.378 13:08:21 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:41.378 13:08:21 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:41.378 13:08:21 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:41.378 13:08:21 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:41.378 13:08:21 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:41.378 [2024-07-26 13:08:21.644104] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:08:41.378 [2024-07-26 13:08:21.644166] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid622449 ] 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:41.378 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:41.378 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:41.378 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:41.378 [2024-07-26 13:08:21.772594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.378 [2024-07-26 13:08:21.855950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:41.638 13:08:21 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # read -r var val 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:41.638 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:41.639 13:08:21 
accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:41.639 13:08:21 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:41.639 13:08:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:42.588 13:08:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:42.588 13:08:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:42.588 13:08:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:42.588 13:08:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:42.588 13:08:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:42.588 13:08:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:42.588 13:08:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:42.588 13:08:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:42.588 13:08:23 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:42.588 13:08:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:42.588 13:08:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:42.588 13:08:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:42.588 13:08:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:42.588 13:08:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:42.588 13:08:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:42.588 13:08:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:42.588 13:08:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:42.588 13:08:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:42.588 13:08:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:42.588 13:08:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:42.588 13:08:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:42.588 13:08:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:42.588 13:08:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:42.588 13:08:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:42.588 13:08:23 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:42.588 13:08:23 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:42.588 13:08:23 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:42.588 00:08:42.588 real 0m1.478s 00:08:42.588 user 0m1.302s 00:08:42.588 sys 0m0.177s 00:08:42.588 13:08:23 accel.accel_decomp_full -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.588 13:08:23 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:42.588 ************************************ 00:08:42.588 END TEST accel_decomp_full 00:08:42.588 
************************************ 00:08:42.917 13:08:23 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:42.917 13:08:23 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:08:42.917 13:08:23 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.917 13:08:23 accel -- common/autotest_common.sh@10 -- # set +x 00:08:42.917 ************************************ 00:08:42.917 START TEST accel_decomp_mcore 00:08:42.917 ************************************ 00:08:42.917 13:08:23 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:42.917 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:42.917 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:42.917 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:42.917 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:42.917 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:42.917 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:42.917 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:42.917 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:42.917 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:42.917 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:42.917 13:08:23 accel.accel_decomp_mcore -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:42.917 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:42.917 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:42.917 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:42.917 [2024-07-26 13:08:23.202296] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:08:42.917 [2024-07-26 13:08:23.202350] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid622727 ] 00:08:42.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:42.917 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:42.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:42.917 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:42.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:42.917 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:42.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:42.917 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:42.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:42.917 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:42.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:42.917 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:42.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:42.917 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:42.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:42.917 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:42.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:42.917 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:42.917 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:42.917 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:42.917 [identical qat_pci_device_allocate()/EAL warning pair repeated for each device from 0000:3d:02.2 through 0000:3f:01.5] qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:42.917 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:42.917 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:42.917 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:42.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:42.917 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:42.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:42.917 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:42.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:42.917 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:42.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:42.917 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:42.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:42.917 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:42.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:42.917 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:42.917 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:42.918 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:42.918 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:42.918 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:42.918 [2024-07-26 13:08:23.335314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:42.918 [2024-07-26 13:08:23.420324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.918 [2024-07-26 13:08:23.420417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.918 [2024-07-26 13:08:23.420478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:42.918 [2024-07-26 13:08:23.420482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:43.176 13:08:23 
accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:43.176 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:43.177 13:08:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.112 13:08:24 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:44.112 00:08:44.112 real 0m1.463s 00:08:44.112 user 0m4.649s 00:08:44.112 sys 0m0.190s 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.112 13:08:24 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:44.112 ************************************ 00:08:44.112 END TEST accel_decomp_mcore 00:08:44.112 ************************************ 00:08:44.372 13:08:24 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:44.372 13:08:24 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:44.372 13:08:24 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.372 13:08:24 accel -- common/autotest_common.sh@10 -- # set +x 00:08:44.372 ************************************ 00:08:44.372 START TEST accel_decomp_full_mcore 00:08:44.372 ************************************ 00:08:44.372 13:08:24 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:44.372 13:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:44.372 13:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:44.372 13:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.372 13:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.372 13:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:44.372 13:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:44.372 13:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:44.372 13:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:44.372 13:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:44.372 13:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:44.372 13:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:44.372 13:08:24 
accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:44.372 13:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:44.372 13:08:24 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:44.372 [2024-07-26 13:08:24.750480] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:08:44.372 [2024-07-26 13:08:24.750537] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid623020 ] 00:08:44.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.372 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:44.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.372 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:44.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.372 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:44.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.372 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:44.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.372 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:44.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.372 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:44.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.372 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:44.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.372 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:44.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.372 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:44.372 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:08:44.372 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:44.372 [identical qat_pci_device_allocate()/EAL warning pair repeated for each device from 0000:3d:02.2 through 0000:3f:01.5] qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.372 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:44.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.372 
EAL: Requested device 0000:3f:01.7 cannot be used 00:08:44.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.372 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:44.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.372 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:44.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.372 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:44.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.372 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:44.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.372 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:44.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.372 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:44.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.372 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:44.372 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:44.372 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:44.372 [2024-07-26 13:08:24.882698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:44.631 [2024-07-26 13:08:24.970521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.631 [2024-07-26 13:08:24.970615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.631 [2024-07-26 13:08:24.970699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:44.631 [2024-07-26 13:08:24.970703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.631 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:44.631 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.631 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.631 13:08:25 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.631 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:44.631 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.631 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.631 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.631 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:44.631 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.631 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.631 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.631 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:44.631 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.631 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 
00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.632 13:08:25 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:44.632 13:08:25 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 
-- # case "$var" in 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:46.008 00:08:46.008 real 0m1.483s 00:08:46.008 user 0m4.714s 
00:08:46.008 sys 0m0.195s 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:46.008 13:08:26 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:46.008 ************************************ 00:08:46.008 END TEST accel_decomp_full_mcore 00:08:46.008 ************************************ 00:08:46.008 13:08:26 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:46.008 13:08:26 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:08:46.008 13:08:26 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:46.008 13:08:26 accel -- common/autotest_common.sh@10 -- # set +x 00:08:46.008 ************************************ 00:08:46.008 START TEST accel_decomp_mthread 00:08:46.008 ************************************ 00:08:46.008 13:08:26 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:46.008 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:46.008 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:46.008 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:46.008 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:46.008 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:46.008 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:46.008 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@12 
-- # build_accel_config 00:08:46.008 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:46.008 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:46.008 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:46.008 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:46.008 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:46.008 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:46.008 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:46.009 [2024-07-26 13:08:26.316090] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:08:46.009 [2024-07-26 13:08:26.316153] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid623302 ] 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 
EAL: Requested device 0000:3d:01.6 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 
0000:3f:01.4 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:46.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:46.009 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:46.009 [2024-07-26 13:08:26.450338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.009 [2024-07-26 13:08:26.530316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.268 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:46.268 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:46.268 13:08:26 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:46.268 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:46.268 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:46.268 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:46.268 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:46.268 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:46.268 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:46.269 13:08:26 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:46.269 13:08:26 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:46.269 13:08:26 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:46.269 13:08:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:47.644 00:08:47.644 real 0m1.465s 00:08:47.644 user 0m1.289s 00:08:47.644 sys 0m0.181s 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.644 13:08:27 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:47.644 ************************************ 00:08:47.644 END TEST accel_decomp_mthread 00:08:47.644 ************************************ 00:08:47.644 13:08:27 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:47.644 13:08:27 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:47.644 13:08:27 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.644 13:08:27 accel -- common/autotest_common.sh@10 -- # set +x 00:08:47.644 ************************************ 00:08:47.644 START TEST accel_decomp_full_mthread 
00:08:47.644 ************************************ 00:08:47.644 13:08:27 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:47.644 13:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:47.644 13:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:47.644 13:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.644 13:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.644 13:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:47.644 13:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:47.644 13:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:47.644 13:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:47.644 13:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:47.644 13:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:47.644 13:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:47.644 13:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:47.644 13:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:47.644 13:08:27 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:47.644 [2024-07-26 13:08:27.865220] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:08:47.644 [2024-07-26 13:08:27.865281] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid623588 ] 00:08:47.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.644 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:47.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.644 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:47.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.644 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:47.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.644 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:47.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.644 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:47.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.644 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:47.644 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.645 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:47.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.645 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:47.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.645 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:47.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.645 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:47.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.645 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:47.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.645 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:47.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.645 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:47.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.645 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:47.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.645 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:47.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.645 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:47.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.645 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:47.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.645 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:47.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.645 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:47.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.645 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:47.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.645 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:47.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.645 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:47.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.645 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:47.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.645 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:47.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.645 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:47.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.645 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:47.645 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.645 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:47.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.645 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:47.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.645 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:47.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.645 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:47.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.645 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:47.645 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:47.645 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:47.645 [2024-07-26 13:08:28.000046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.645 [2024-07-26 13:08:28.083587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.645 13:08:28 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # 
IFS=: 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 
00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:47.645 13:08:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- 
# val= 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 
00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:49.020 00:08:49.020 real 0m1.496s 00:08:49.020 user 0m0.010s 00:08:49.020 sys 0m0.003s 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.020 13:08:29 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:49.020 ************************************ 00:08:49.020 END TEST accel_decomp_full_mthread 00:08:49.020 ************************************ 00:08:49.020 13:08:29 accel -- accel/accel.sh@124 -- # [[ y == y ]] 00:08:49.020 13:08:29 accel -- accel/accel.sh@125 -- # COMPRESSDEV=1 00:08:49.020 13:08:29 accel -- accel/accel.sh@126 -- # get_expected_opcs 00:08:49.020 13:08:29 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:49.020 13:08:29 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=623864 00:08:49.020 13:08:29 accel -- accel/accel.sh@63 -- # waitforlisten 623864 00:08:49.020 13:08:29 accel -- common/autotest_common.sh@831 -- # '[' -z 623864 ']' 00:08:49.020 13:08:29 accel -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.020 13:08:29 accel -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.020 13:08:29 accel -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.020 13:08:29 accel -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.020 13:08:29 accel -- common/autotest_common.sh@10 -- # set +x 00:08:49.020 13:08:29 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:08:49.020 13:08:29 accel -- accel/accel.sh@61 -- # build_accel_config 00:08:49.020 13:08:29 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:49.020 13:08:29 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:49.021 13:08:29 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:49.021 13:08:29 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:49.021 13:08:29 accel -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:08:49.021 13:08:29 accel -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:08:49.021 13:08:29 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:49.021 13:08:29 accel -- accel/accel.sh@41 -- # jq -r . 00:08:49.021 [2024-07-26 13:08:29.425746] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
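The build_accel_config trace above (accel.sh@31-@41) appends a compressdev_scan_accel_module entry to the accel_json_cfg array, joins it with IFS=, and hands the result to spdk_tgt over a file descriptor. A minimal standalone sketch of that JSON assembly, assuming the subsystem wrapper shape that save_config shows later in this log (the helper name join_cfg is illustrative):

```shell
#!/usr/bin/env bash
# Sketch of accel.sh's build_accel_config: collect per-module RPC entries,
# join them with commas, and wrap them in the accel subsystem config that
# spdk_tgt reads from a file descriptor. Self-contained; no SPDK needed.
accel_json_cfg=('{"method": "compressdev_scan_accel_module", "params": {"pmd": 0}}')

# Join the array entries with commas, as "local IFS=," does in the trace.
join_cfg() { local IFS=,; echo "${accel_json_cfg[*]}"; }

config="{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [$(join_cfg)]}]}"
# A real run would launch the target with the config on an inherited fd, e.g.:
#   spdk_tgt -c /dev/fd/63 63<<< "$config"
echo "$config"
```

Passing the config as /dev/fd/63 lets the harness avoid writing a temporary file while still giving spdk_tgt an ordinary path to open.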
00:08:49.021 [2024-07-26 13:08:29.425809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid623864 ] 00:08:49.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:49.021 EAL: Requested device 0000:3d:01.0 cannot be used [the "Reached maximum number of QAT devices" pair repeats for each remaining QAT VF, 0000:3d:01.1 through 0000:3f:02.7] 00:08:49.280 [2024-07-26 13:08:29.556511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.280 [2024-07-26 13:08:29.639003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.845 [2024-07-26 13:08:30.333060] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:08:50.103 13:08:30 accel -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:50.103 13:08:30 accel -- common/autotest_common.sh@864 -- # return 0 00:08:50.103 13:08:30 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:08:50.103 13:08:30 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:08:50.103 13:08:30 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:08:50.103 13:08:30 accel -- accel/accel.sh@68 -- # [[ -n 1 ]] 00:08:50.103 13:08:30 accel -- accel/accel.sh@68 -- # check_save_config compressdev_scan_accel_module 00:08:50.103 13:08:30 accel -- accel/accel.sh@56 -- # rpc_cmd save_config 00:08:50.103 13:08:30 accel -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.103 13:08:30 accel -- common/autotest_common.sh@10 -- # set +x 00:08:50.103 13:08:30 accel -- accel/accel.sh@56 -- # jq -r 
'.subsystems[] | select(.subsystem=="accel").config[]' 00:08:50.103 13:08:30 accel -- accel/accel.sh@56 -- # grep compressdev_scan_accel_module 00:08:50.362 13:08:30 accel -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.362 "method": "compressdev_scan_accel_module", 00:08:50.362 13:08:30 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:08:50.362 13:08:30 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:08:50.362 13:08:30 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:08:50.362 13:08:30 accel -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.362 13:08:30 accel -- common/autotest_common.sh@10 -- # set +x 00:08:50.362 13:08:30 accel -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.362 13:08:30 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:08:50.362 13:08:30 accel -- accel/accel.sh@72 -- # IFS== 00:08:50.362 13:08:30 accel -- accel/accel.sh@72 -- # read -r opc module 00:08:50.362 13:08:30 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software [loop iteration repeated for each remaining opcode; two of the assignments resolve to dpdk_compressdev, the rest to software] 00:08:50.362 13:08:30 accel -- accel/accel.sh@75 -- # killprocess 623864 00:08:50.362 13:08:30 accel -- common/autotest_common.sh@950 -- # '[' -z 623864 ']' 00:08:50.362 13:08:30 accel -- common/autotest_common.sh@954 -- # kill -0 623864 00:08:50.362 13:08:30 accel -- common/autotest_common.sh@955 -- # uname 00:08:50.362 13:08:30 accel -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:50.362 13:08:30 accel -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 623864 00:08:50.362 13:08:30 accel -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:50.362 13:08:30 accel -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:50.362 13:08:30 accel -- common/autotest_common.sh@968 -- # echo 'killing process with pid 623864' 00:08:50.362 killing process with pid 623864 00:08:50.362 13:08:30 accel -- common/autotest_common.sh@969 -- # kill 623864 00:08:50.362 13:08:30 accel -- common/autotest_common.sh@974 -- # wait 623864 00:08:50.929 13:08:31 accel -- accel/accel.sh@76 -- # trap - ERR 00:08:50.929 13:08:31 accel -- accel/accel.sh@127 -- # run_test accel_cdev_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:50.929 13:08:31 accel -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:50.929 13:08:31 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.929 13:08:31 accel -- common/autotest_common.sh@10 -- # set +x 00:08:50.929 ************************************ 00:08:50.929 START TEST accel_cdev_comp 00:08:50.929 ************************************ 00:08:50.929 13:08:31 accel.accel_cdev_comp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:50.929 13:08:31 accel.accel_cdev_comp -- accel/accel.sh@16 -- # local accel_opc 00:08:50.929 13:08:31 accel.accel_cdev_comp -- accel/accel.sh@17 -- # local accel_module 00:08:50.929 13:08:31 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:50.929 13:08:31 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:50.929 13:08:31 accel.accel_cdev_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:50.929 13:08:31 accel.accel_cdev_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 
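The "IFS=: read -r var val" cycles that dominate this log are accel.sh splitting accel_perf's "key: value" output on the first colon to recover the module and opcode actually used. A self-contained sketch of that loop, with illustrative sample input standing in for live accel_perf output (the field labels here are assumptions, not accel_perf's exact ones):

```shell
#!/usr/bin/env bash
# Sketch of accel.sh's output-parsing loop: split each "key: value" line on
# the first colon and remember the fields the test assertions need.
accel_module=""
accel_opc=""
while IFS=: read -r var val; do
  case "$var" in
    "Module")   accel_module=${val# } ;;  # strip the space after the colon
    "Workload") accel_opc=${val# } ;;
  esac
done <<'EOF'
Module: dpdk_compressdev
Workload: compress
EOF
echo "$accel_module $accel_opc"
# → dpdk_compressdev compress
```

Feeding the loop with a here-doc (rather than a pipe) keeps it in the current shell, so the variables survive the loop, which is why the later `[[ -n dpdk_compressdev ]]`-style checks in the trace can see them.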
00:08:50.929 13:08:31 accel.accel_cdev_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:50.929 13:08:31 accel.accel_cdev_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:50.929 13:08:31 accel.accel_cdev_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:50.929 13:08:31 accel.accel_cdev_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:50.929 13:08:31 accel.accel_cdev_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:50.929 13:08:31 accel.accel_cdev_comp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:08:50.929 13:08:31 accel.accel_cdev_comp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:08:50.929 13:08:31 accel.accel_cdev_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:50.929 13:08:31 accel.accel_cdev_comp -- accel/accel.sh@41 -- # jq -r . 00:08:50.929 [2024-07-26 13:08:31.259519] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:08:50.929 [2024-07-26 13:08:31.259575] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid624151 ] 00:08:50.929 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:50.929 EAL: Requested device 0000:3d:01.0 cannot be used [the "Reached maximum number of QAT devices" pair repeats for each remaining QAT VF, 0000:3d:01.1 through 0000:3f:02.7] 00:08:50.929 [2024-07-26 13:08:31.390435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.188 [2024-07-26 13:08:31.474452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.755 [2024-07-26 
13:08:32.179680] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:08:51.755 [2024-07-26 13:08:32.182052] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xaa9fe0 PMD being used: compress_qat 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=0x1 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:51.755 [2024-07-26 13:08:32.185850] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xcaed30 PMD being used: compress_qat 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 
00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=compress 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:51.755 13:08:32 
accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=1 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=No 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@21 -- # 
case "$var" in 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:51.755 13:08:32 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val 00:08:53.131 13:08:33 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val= 00:08:53.131 13:08:33 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:53.131 13:08:33 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=: 00:08:53.131 13:08:33 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val [identical empty "val=" xtrace cycle repeated for the remaining output fields] 00:08:53.131 13:08:33 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:08:53.131 13:08:33 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:53.131 13:08:33 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:08:53.131 00:08:53.131 real 0m2.107s 00:08:53.131 user 0m0.009s 00:08:53.131 sys 0m0.003s 00:08:53.131 13:08:33 accel.accel_cdev_comp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.131 13:08:33 accel.accel_cdev_comp -- common/autotest_common.sh@10 -- # set +x 00:08:53.131 ************************************ 00:08:53.131 END TEST accel_cdev_comp 00:08:53.131 ************************************ 00:08:53.131 13:08:33 accel -- accel/accel.sh@128 -- # run_test accel_cdev_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:53.131 13:08:33 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:08:53.131 13:08:33 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.131 13:08:33 accel -- common/autotest_common.sh@10 -- # set +x 00:08:53.131 ************************************ 00:08:53.131 START TEST accel_cdev_decomp 00:08:53.131 ************************************ 00:08:53.131 13:08:33 accel.accel_cdev_decomp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:53.131 13:08:33 accel.accel_cdev_decomp -- accel/accel.sh@16 -- # local accel_opc 
00:08:53.131 13:08:33 accel.accel_cdev_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:53.132 13:08:33 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:53.132 13:08:33 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:53.132 13:08:33 accel.accel_cdev_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:53.132 13:08:33 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:08:53.132 13:08:33 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:53.132 13:08:33 accel.accel_cdev_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:53.132 13:08:33 accel.accel_cdev_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:53.132 13:08:33 accel.accel_cdev_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:53.132 13:08:33 accel.accel_cdev_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:53.132 13:08:33 accel.accel_cdev_decomp -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:08:53.132 13:08:33 accel.accel_cdev_decomp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:08:53.132 13:08:33 accel.accel_cdev_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:53.132 13:08:33 accel.accel_cdev_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:53.132 [2024-07-26 13:08:33.427190] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
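The get_expected_opcs step traced earlier (accel.sh@70-@73) turns the accel_get_opc_assignments RPC reply into "opcode=module" pairs with jq, then splits each pair on "=" into an associative array. A standalone sketch with sample JSON standing in for the live RPC output:

```shell
#!/usr/bin/env bash
# Sketch of accel.sh@70-@73: map the opcode-assignment JSON to "key=value"
# words, then index them by opcode. The sample JSON is illustrative.
assignments='{"compress": "dpdk_compressdev", "decompress": "dpdk_compressdev", "copy": "software"}'

# jq emits one "opcode=module" line per entry; word splitting fills the array.
exp_opcs=($(jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' <<< "$assignments"))

declare -A expected_opcs
for opc_opt in "${exp_opcs[@]}"; do
  IFS== read -r opc module <<< "$opc_opt"   # split "opcode=module" on '='
  expected_opcs["$opc"]=$module
done
echo "${expected_opcs[compress]}"
# → dpdk_compressdev
```

Setting IFS only for the `read` (as the trace's `IFS==` does) scopes the delimiter change to that one command, so the rest of the loop keeps normal word splitting.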
00:08:53.132 [2024-07-26 13:08:33.427244] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid624443 ] 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:53.132 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:53.132 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:53.132 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:53.132 [2024-07-26 13:08:33.559951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.132 [2024-07-26 13:08:33.642969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.068 [2024-07-26 13:08:34.340512] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:08:54.068 [2024-07-26 13:08:34.342887] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x849fe0 PMD being used: compress_qat 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.068 13:08:34 
accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:54.068 [2024-07-26 13:08:34.346775] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xa4ed30 PMD being used: compress_qat 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:54.068 13:08:34 
accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.068 13:08:34 
accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.068 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=1 00:08:54.069 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.069 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.069 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.069 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:54.069 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.069 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.069 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.069 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:54.069 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.069 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.069 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.069 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:54.069 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.069 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.069 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:54.069 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:54.069 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:54.069 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:54.069 13:08:34 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:55.004 13:08:35 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:55.004 13:08:35 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:55.004 13:08:35 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:55.004 13:08:35 
accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:55.004 13:08:35 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:55.004 13:08:35 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:55.004 13:08:35 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:55.004 13:08:35 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:55.004 13:08:35 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:55.004 13:08:35 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:55.004 13:08:35 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:55.004 13:08:35 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:55.004 13:08:35 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:55.004 13:08:35 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:55.004 13:08:35 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:55.004 13:08:35 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:55.004 13:08:35 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:55.004 13:08:35 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:55.004 13:08:35 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:55.004 13:08:35 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:55.004 13:08:35 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:08:55.004 13:08:35 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:55.004 13:08:35 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:55.004 13:08:35 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:55.004 13:08:35 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:08:55.004 13:08:35 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:55.004 13:08:35 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ dpdk_compressdev 
== \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:08:55.004 00:08:55.004 real 0m2.091s 00:08:55.004 user 0m0.011s 00:08:55.004 sys 0m0.001s 00:08:55.004 13:08:35 accel.accel_cdev_decomp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.004 13:08:35 accel.accel_cdev_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:55.004 ************************************ 00:08:55.004 END TEST accel_cdev_decomp 00:08:55.004 ************************************ 00:08:55.263 13:08:35 accel -- accel/accel.sh@129 -- # run_test accel_cdev_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:55.263 13:08:35 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:08:55.263 13:08:35 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.263 13:08:35 accel -- common/autotest_common.sh@10 -- # set +x 00:08:55.263 ************************************ 00:08:55.263 START TEST accel_cdev_decomp_full 00:08:55.263 ************************************ 00:08:55.263 13:08:35 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:55.263 13:08:35 accel.accel_cdev_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:55.263 13:08:35 accel.accel_cdev_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:55.263 13:08:35 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:55.263 13:08:35 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:55.263 13:08:35 accel.accel_cdev_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:55.263 13:08:35 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:55.263 13:08:35 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:08:55.263 13:08:35 accel.accel_cdev_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:55.263 13:08:35 accel.accel_cdev_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:55.263 13:08:35 accel.accel_cdev_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:55.263 13:08:35 accel.accel_cdev_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:55.263 13:08:35 accel.accel_cdev_decomp_full -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:08:55.263 13:08:35 accel.accel_cdev_decomp_full -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:08:55.263 13:08:35 accel.accel_cdev_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:55.263 13:08:35 accel.accel_cdev_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:55.263 [2024-07-26 13:08:35.605646] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:08:55.263 [2024-07-26 13:08:35.605701] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid624947 ] 00:08:55.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.263 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:55.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.263 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:55.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.263 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:55.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.263 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:55.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.263 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:55.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.263 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:55.263 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.263 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:55.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.264 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:55.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.264 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:55.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.264 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:55.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.264 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:55.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.264 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:55.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.264 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:55.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.264 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:55.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.264 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:55.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.264 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:55.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.264 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:55.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.264 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:55.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.264 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:55.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.264 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:55.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.264 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:55.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.264 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:55.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.264 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:55.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.264 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:55.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.264 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:55.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.264 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:55.264 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.264 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:55.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.264 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:55.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.264 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:55.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.264 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:55.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.264 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:55.264 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:55.264 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:55.264 [2024-07-26 13:08:35.737479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.523 [2024-07-26 13:08:35.821300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.090 [2024-07-26 13:08:36.507931] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:08:56.090 [2024-07-26 13:08:36.510310] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1dbafe0 PMD being used: compress_qat 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # 
read -r var val 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:56.090 [2024-07-26 13:08:36.513370] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1dbe2b0 PMD being used: compress_qat 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # 
read -r var val 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:56.090 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:56.091 13:08:36 
accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # 
IFS=: 00:08:56.091 13:08:36 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- 
accel/accel.sh@21 -- # case "$var" in 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:08:57.469 00:08:57.469 real 0m2.090s 00:08:57.469 user 0m0.012s 00:08:57.469 sys 0m0.000s 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.469 13:08:37 accel.accel_cdev_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:57.469 ************************************ 00:08:57.469 END TEST accel_cdev_decomp_full 00:08:57.469 ************************************ 00:08:57.469 13:08:37 accel -- accel/accel.sh@130 -- # run_test accel_cdev_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:57.469 13:08:37 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:08:57.469 13:08:37 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.469 13:08:37 accel -- common/autotest_common.sh@10 -- # set +x 00:08:57.469 ************************************ 00:08:57.469 START TEST accel_cdev_decomp_mcore 00:08:57.469 ************************************ 00:08:57.469 13:08:37 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:57.469 13:08:37 accel.accel_cdev_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:57.469 13:08:37 accel.accel_cdev_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:57.469 13:08:37 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:57.469 13:08:37 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:57.469 13:08:37 accel.accel_cdev_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:57.469 13:08:37 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:57.469 13:08:37 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:57.469 13:08:37 accel.accel_cdev_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:57.469 13:08:37 accel.accel_cdev_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:57.469 13:08:37 accel.accel_cdev_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:57.469 13:08:37 accel.accel_cdev_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:57.469 13:08:37 accel.accel_cdev_decomp_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:08:57.469 13:08:37 accel.accel_cdev_decomp_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:08:57.469 13:08:37 accel.accel_cdev_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:57.469 13:08:37 accel.accel_cdev_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:57.469 [2024-07-26 13:08:37.768149] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:08:57.469 [2024-07-26 13:08:37.768203] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid625271 ] 00:08:57.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.469 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:57.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.469 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:57.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.469 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:57.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.469 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:57.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.469 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:57.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.469 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:57.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.469 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:57.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.469 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:57.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.469 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:57.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.469 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:57.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.469 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:57.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.469 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:57.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.469 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:57.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.469 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:57.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.469 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:57.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.469 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:57.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.469 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:57.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.469 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:57.469 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.470 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:57.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.470 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:57.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.470 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:57.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.470 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:57.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.470 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:57.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.470 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:57.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.470 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:57.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.470 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:57.470 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.470 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:57.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.470 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:57.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.470 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:57.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.470 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:57.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.470 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:57.470 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.470 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:57.470 [2024-07-26 13:08:37.899196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:57.470 [2024-07-26 13:08:37.986398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.470 [2024-07-26 13:08:37.986490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.470 [2024-07-26 13:08:37.986573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:57.470 [2024-07-26 13:08:37.986577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.460 [2024-07-26 13:08:38.663719] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:08:58.460 [2024-07-26 13:08:38.666094] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x17e4600 PMD being used: compress_qat 00:08:58.460 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.460 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.460 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.460 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.460 
13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.460 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.460 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.460 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.460 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.461 [2024-07-26 13:08:38.671282] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f33a019b8b0 PMD being used: compress_qat 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:58.461 [2024-07-26 13:08:38.672109] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f339819b8b0 PMD being used: compress_qat 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.461 [2024-07-26 13:08:38.672937] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x17e9890 PMD being used: compress_qat 00:08:58.461 [2024-07-26 13:08:38.673107] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f339019b8b0 PMD being used: compress_qat 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.461 13:08:38 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:58.461 13:08:38 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:59.399 
13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:08:59.399 00:08:59.399 real 0m2.105s 00:08:59.399 user 0m6.853s 00:08:59.399 sys 0m0.547s 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.399 13:08:39 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:59.399 ************************************ 00:08:59.399 END TEST accel_cdev_decomp_mcore 00:08:59.399 ************************************ 00:08:59.399 13:08:39 accel -- accel/accel.sh@131 -- # run_test accel_cdev_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:59.399 13:08:39 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:59.399 13:08:39 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.399 13:08:39 accel -- common/autotest_common.sh@10 -- # set +x 00:08:59.659 ************************************ 00:08:59.659 START TEST accel_cdev_decomp_full_mcore 00:08:59.659 ************************************ 00:08:59.659 13:08:39 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:59.659 13:08:39 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:59.659 13:08:39 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:59.659 13:08:39 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 
00:08:59.659 13:08:39 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:59.659 13:08:39 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:59.659 13:08:39 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:59.659 13:08:39 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:59.659 13:08:39 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:59.659 13:08:39 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:59.659 13:08:39 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:59.659 13:08:39 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:59.659 13:08:39 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:08:59.659 13:08:39 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:08:59.659 13:08:39 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:59.659 13:08:39 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:59.659 [2024-07-26 13:08:39.961362] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:08:59.659 [2024-07-26 13:08:39.961423] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid625618 ] 00:08:59.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.659 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:59.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.659 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:59.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.659 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:59.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.659 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:59.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.659 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:59.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.659 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:59.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.659 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:59.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.659 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:59.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.659 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:59.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.659 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:59.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.659 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:59.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.659 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:59.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.659 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:59.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.659 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:59.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.659 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:59.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.659 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:59.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.659 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:59.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.659 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:59.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.659 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:59.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.659 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:59.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.659 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:59.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.659 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:59.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.659 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:59.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.659 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:59.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.659 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:59.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.659 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:59.659 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.659 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:59.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.659 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:59.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.660 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:59.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.660 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:59.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.660 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:59.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:59.660 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:59.660 [2024-07-26 13:08:40.098746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:59.919 [2024-07-26 13:08:40.186848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.919 [2024-07-26 13:08:40.186942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.919 [2024-07-26 13:08:40.187024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:59.919 [2024-07-26 13:08:40.187028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.488 [2024-07-26 13:08:40.873267] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:09:00.488 [2024-07-26 13:08:40.875636] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x153f600 PMD being used: compress_qat 00:09:00.488 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:00.488 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.488 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.488 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r 
var val 00:09:00.488 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:00.488 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.488 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.488 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.488 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:00.488 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.488 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.488 [2024-07-26 13:08:40.879920] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fe23c19b8b0 PMD being used: compress_qat 00:09:00.488 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.488 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:09:00.488 [2024-07-26 13:08:40.880782] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fe23419b8b0 PMD being used: compress_qat 00:09:00.488 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.488 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.488 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.488 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:00.488 [2024-07-26 13:08:40.881614] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x153f6a0 PMD being used: compress_qat 00:09:00.488 [2024-07-26 13:08:40.881800] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fe22c19b8b0 PMD being used: compress_qat 00:09:00.488 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.488 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.488 
13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.488 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:00.488 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.488 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.488 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.488 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@22 -- # 
accel_module=dpdk_compressdev 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.489 13:08:40 
accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:00.489 13:08:40 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- 
accel/accel.sh@19 -- # read -r var val 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 
00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:09:01.869 00:09:01.869 real 0m2.124s 00:09:01.869 user 0m6.901s 00:09:01.869 sys 0m0.531s 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:01.869 13:08:42 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:09:01.869 ************************************ 00:09:01.869 END TEST accel_cdev_decomp_full_mcore 00:09:01.869 ************************************ 00:09:01.869 13:08:42 accel -- accel/accel.sh@132 -- # run_test accel_cdev_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:01.869 13:08:42 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:09:01.869 13:08:42 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.869 13:08:42 accel -- common/autotest_common.sh@10 -- # set +x 00:09:01.869 ************************************ 00:09:01.869 START TEST 
accel_cdev_decomp_mthread 00:09:01.869 ************************************ 00:09:01.869 13:08:42 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:01.869 13:08:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:09:01.869 13:08:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:09:01.869 13:08:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:01.869 13:08:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:01.869 13:08:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:01.869 13:08:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:01.869 13:08:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:09:01.869 13:08:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:01.869 13:08:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:01.869 13:08:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:01.869 13:08:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:01.869 13:08:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:09:01.869 13:08:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:09:01.869 13:08:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:09:01.869 13:08:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@41 -- 
# jq -r . 00:09:01.869 [2024-07-26 13:08:42.169008] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:09:01.869 [2024-07-26 13:08:42.169065] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid626103 ] 00:09:01.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.869 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:01.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.869 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:01.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.869 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:01.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.869 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:01.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.869 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:01.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.869 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:01.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.869 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:01.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.869 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:01.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.869 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:01.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.869 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:01.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.869 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:01.869 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.869 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:01.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.869 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:01.869 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.869 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:01.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.870 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:01.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.870 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:01.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.870 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:01.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.870 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:01.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.870 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:01.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.870 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:01.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.870 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:01.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.870 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:01.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.870 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:01.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.870 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:01.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.870 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:01.870 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.870 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:01.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.870 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:01.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.870 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:01.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.870 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:01.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.870 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:01.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.870 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:01.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:01.870 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:01.870 [2024-07-26 13:08:42.299773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.870 [2024-07-26 13:08:42.381791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.808 [2024-07-26 13:08:43.059063] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:09:02.808 [2024-07-26 13:08:43.061447] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1a97fe0 PMD being used: compress_qat 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:02.808 [2024-07-26 13:08:43.066043] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1a9d180 PMD being used: compress_qat 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:02.808 [2024-07-26 13:08:43.068303] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1bbfb20 PMD being used: compress_qat 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # 
val=decompress 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.808 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:02.809 13:08:43 
accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:02.809 13:08:43 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.748 13:08:44 
accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:03.748 13:08:44 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:09:03.748 00:09:03.748 real 0m2.091s 00:09:03.748 user 0m1.569s 00:09:03.748 sys 0m0.524s 00:09:03.749 13:08:44 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:03.749 13:08:44 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:09:03.749 ************************************ 00:09:03.749 END TEST accel_cdev_decomp_mthread 00:09:03.749 ************************************ 00:09:03.749 13:08:44 accel -- 
accel/accel.sh@133 -- # run_test accel_cdev_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:03.749 13:08:44 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:09:03.749 13:08:44 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:03.749 13:08:44 accel -- common/autotest_common.sh@10 -- # set +x 00:09:04.009 ************************************ 00:09:04.009 START TEST accel_cdev_decomp_full_mthread 00:09:04.009 ************************************ 00:09:04.009 13:08:44 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:04.009 13:08:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:09:04.009 13:08:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:09:04.009 13:08:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.009 13:08:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.009 13:08:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:04.009 13:08:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:04.009 13:08:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:09:04.009 13:08:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:04.009 13:08:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:04.009 13:08:44 accel.accel_cdev_decomp_full_mthread 
-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:04.009 13:08:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:04.009 13:08:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:09:04.009 13:08:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:09:04.009 13:08:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:09:04.009 13:08:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:09:04.009 [2024-07-26 13:08:44.345110] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:09:04.009 [2024-07-26 13:08:44.345181] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid626392 ] 00:09:04.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.009 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:04.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.009 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:04.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.009 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:04.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.009 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:04.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.009 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:04.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.009 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:04.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.009 EAL: Requested device 0000:3d:01.6 cannot be 
used 00:09:04.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.009 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:04.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.009 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:04.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.009 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:04.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.009 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:04.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.009 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:04.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.009 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:04.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.009 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:04.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.009 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:04.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.009 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:04.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.009 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:04.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.009 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:04.009 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.009 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:04.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.010 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:04.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.010 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:04.010 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.010 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:04.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.010 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:04.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.010 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:04.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.010 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:04.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.010 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:04.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.010 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:04.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.010 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:04.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.010 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:04.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.010 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:04.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.010 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:04.010 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:04.010 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:04.010 [2024-07-26 13:08:44.478851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.270 [2024-07-26 13:08:44.563613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.840 [2024-07-26 13:08:45.250483] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:09:04.840 [2024-07-26 13:08:45.252836] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1de8fe0 PMD being 
used: compress_qat 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:04.840 [2024-07-26 13:08:45.256627] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1de9080 PMD being used: compress_qat 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.840 [2024-07-26 
13:08:45.259067] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1fedc10 PMD being used: compress_qat 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:09:04.840 
13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:04.840 13:08:45 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.222 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.222 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.222 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.222 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.222 13:08:46 accel.accel_cdev_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:09:06.222 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.222 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.222 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.222 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.222 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.222 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.222 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.222 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.222 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.222 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.222 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.222 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.222 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.222 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.223 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.223 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.223 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:09:06.223 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.223 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.223 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:09:06.223 13:08:46 accel.accel_cdev_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:09:06.223 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:06.223 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:06.223 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:09:06.223 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:06.223 13:08:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:09:06.223 00:09:06.223 real 0m2.106s 00:09:06.223 user 0m1.589s 00:09:06.223 sys 0m0.518s 00:09:06.223 13:08:46 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:06.223 13:08:46 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:09:06.223 ************************************ 00:09:06.223 END TEST accel_cdev_decomp_full_mthread 00:09:06.223 ************************************ 00:09:06.223 13:08:46 accel -- accel/accel.sh@134 -- # unset COMPRESSDEV 00:09:06.223 13:08:46 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:06.223 13:08:46 accel -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:06.223 13:08:46 accel -- accel/accel.sh@137 -- # build_accel_config 00:09:06.223 13:08:46 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:06.223 13:08:46 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:06.223 13:08:46 accel -- common/autotest_common.sh@10 -- # set +x 00:09:06.223 13:08:46 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:06.223 13:08:46 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:06.223 13:08:46 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:06.223 13:08:46 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:06.223 13:08:46 accel -- accel/accel.sh@40 
-- # local IFS=, 00:09:06.223 13:08:46 accel -- accel/accel.sh@41 -- # jq -r . 00:09:06.223 ************************************ 00:09:06.223 START TEST accel_dif_functional_tests 00:09:06.223 ************************************ 00:09:06.223 13:08:46 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:06.223 [2024-07-26 13:08:46.546493] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:09:06.223 [2024-07-26 13:08:46.546548] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid626907 ] 00:09:06.223 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.223 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:06.223 [2024-07-26 13:08:46.676355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:06.483 [2024-07-26 13:08:46.761574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.483 [2024-07-26 13:08:46.761657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.483 [2024-07-26 13:08:46.761668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.483 00:09:06.483 00:09:06.483 CUnit - A unit testing framework for C - Version 2.1-3 00:09:06.483 http://cunit.sourceforge.net/ 00:09:06.483 00:09:06.483 00:09:06.483 Suite: accel_dif 00:09:06.483 Test: verify: DIF generated, 
GUARD check ...passed 00:09:06.483 Test: verify: DIF generated, APPTAG check ...passed 00:09:06.483 Test: verify: DIF generated, REFTAG check ...passed 00:09:06.483 Test: verify: DIF not generated, GUARD check ...[2024-07-26 13:08:46.845743] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:06.483 passed 00:09:06.483 Test: verify: DIF not generated, APPTAG check ...[2024-07-26 13:08:46.845808] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:06.483 passed 00:09:06.483 Test: verify: DIF not generated, REFTAG check ...[2024-07-26 13:08:46.845838] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:06.483 passed 00:09:06.483 Test: verify: APPTAG correct, APPTAG check ...passed 00:09:06.483 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-26 13:08:46.845903] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:09:06.483 passed 00:09:06.483 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:09:06.483 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:09:06.483 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:09:06.483 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-26 13:08:46.846048] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:09:06.483 passed 00:09:06.483 Test: verify copy: DIF generated, GUARD check ...passed 00:09:06.483 Test: verify copy: DIF generated, APPTAG check ...passed 00:09:06.483 Test: verify copy: DIF generated, REFTAG check ...passed 00:09:06.483 Test: verify copy: DIF not generated, GUARD check ...[2024-07-26 13:08:46.846211] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:06.483 passed 00:09:06.483 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-26 13:08:46.846244] dif.c: 876:_dif_verify: 
*ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:06.483 passed 00:09:06.483 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-26 13:08:46.846274] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:06.483 passed 00:09:06.483 Test: generate copy: DIF generated, GUARD check ...passed 00:09:06.483 Test: generate copy: DIF generated, APTTAG check ...passed 00:09:06.483 Test: generate copy: DIF generated, REFTAG check ...passed 00:09:06.483 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:09:06.483 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:09:06.483 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:09:06.483 Test: generate copy: iovecs-len validate ...[2024-07-26 13:08:46.846511] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:09:06.483 passed 00:09:06.483 Test: generate copy: buffer alignment validate ...passed 00:09:06.483 00:09:06.483 Run Summary: Type Total Ran Passed Failed Inactive 00:09:06.483 suites 1 1 n/a 0 0 00:09:06.483 tests 26 26 26 0 0 00:09:06.483 asserts 115 115 115 0 n/a 00:09:06.483 00:09:06.483 Elapsed time = 0.002 seconds 00:09:06.744 00:09:06.744 real 0m0.530s 00:09:06.744 user 0m0.669s 00:09:06.744 sys 0m0.205s 00:09:06.744 13:08:47 accel.accel_dif_functional_tests -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:06.744 13:08:47 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:09:06.744 ************************************ 00:09:06.744 END TEST accel_dif_functional_tests 00:09:06.744 ************************************ 00:09:06.744 00:09:06.744 real 0m51.574s 00:09:06.744 user 0m59.902s 00:09:06.744 sys 0m11.301s 00:09:06.744 13:08:47 accel -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:06.744 13:08:47 accel -- common/autotest_common.sh@10 -- # set +x 
00:09:06.744 ************************************ 00:09:06.744 END TEST accel 00:09:06.744 ************************************ 00:09:06.744 13:08:47 -- spdk/autotest.sh@186 -- # run_test accel_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:09:06.744 13:08:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:06.744 13:08:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:06.744 13:08:47 -- common/autotest_common.sh@10 -- # set +x 00:09:06.744 ************************************ 00:09:06.744 START TEST accel_rpc 00:09:06.744 ************************************ 00:09:06.744 13:08:47 accel_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:09:06.744 * Looking for test storage... 00:09:06.744 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:09:06.744 13:08:47 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:06.744 13:08:47 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=626997 00:09:06.744 13:08:47 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 626997 00:09:06.744 13:08:47 accel_rpc -- common/autotest_common.sh@831 -- # '[' -z 626997 ']' 00:09:06.744 13:08:47 accel_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.744 13:08:47 accel_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:06.744 13:08:47 accel_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:06.744 13:08:47 accel_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:06.744 13:08:47 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.744 13:08:47 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:09:07.004 [2024-07-26 13:08:47.333086] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:09:07.004 [2024-07-26 13:08:47.333162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid626997 ] 00:09:07.004 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:07.004 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:07.005 [2024-07-26 13:08:47.465859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.264 [2024-07-26 13:08:47.553148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.835 13:08:48 accel_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:07.835 13:08:48 accel_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:07.835 13:08:48 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:09:07.835 13:08:48 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:09:07.835 13:08:48 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:09:07.835 13:08:48 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:09:07.835 13:08:48 accel_rpc -- accel/accel_rpc.sh@53 -- # 
run_test accel_assign_opcode accel_assign_opcode_test_suite 00:09:07.835 13:08:48 accel_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:07.835 13:08:48 accel_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.835 13:08:48 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.835 ************************************ 00:09:07.835 START TEST accel_assign_opcode 00:09:07.835 ************************************ 00:09:07.835 13:08:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # accel_assign_opcode_test_suite 00:09:07.835 13:08:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:09:07.835 13:08:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.835 13:08:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:07.835 [2024-07-26 13:08:48.247299] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:09:07.835 13:08:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.835 13:08:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:09:07.835 13:08:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.835 13:08:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:07.835 [2024-07-26 13:08:48.255315] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:09:07.835 13:08:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.835 13:08:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:09:07.835 13:08:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.835 13:08:48 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@10 -- # set +x 00:09:08.095 13:08:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.095 13:08:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:09:08.095 13:08:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.095 13:08:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:08.095 13:08:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:09:08.095 13:08:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:09:08.095 13:08:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.095 software 00:09:08.095 00:09:08.095 real 0m0.270s 00:09:08.095 user 0m0.038s 00:09:08.095 sys 0m0.016s 00:09:08.095 13:08:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.095 13:08:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:09:08.095 ************************************ 00:09:08.095 END TEST accel_assign_opcode 00:09:08.095 ************************************ 00:09:08.095 13:08:48 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 626997 00:09:08.095 13:08:48 accel_rpc -- common/autotest_common.sh@950 -- # '[' -z 626997 ']' 00:09:08.095 13:08:48 accel_rpc -- common/autotest_common.sh@954 -- # kill -0 626997 00:09:08.095 13:08:48 accel_rpc -- common/autotest_common.sh@955 -- # uname 00:09:08.095 13:08:48 accel_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:08.095 13:08:48 accel_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 626997 00:09:08.095 13:08:48 accel_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:08.095 13:08:48 accel_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:08.095 13:08:48 accel_rpc -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 626997' 00:09:08.095 killing process with pid 626997 00:09:08.095 13:08:48 accel_rpc -- common/autotest_common.sh@969 -- # kill 626997 00:09:08.095 13:08:48 accel_rpc -- common/autotest_common.sh@974 -- # wait 626997 00:09:08.665 00:09:08.665 real 0m1.785s 00:09:08.665 user 0m1.806s 00:09:08.665 sys 0m0.577s 00:09:08.665 13:08:48 accel_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.665 13:08:48 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.665 ************************************ 00:09:08.665 END TEST accel_rpc 00:09:08.665 ************************************ 00:09:08.665 13:08:48 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:09:08.665 13:08:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:08.665 13:08:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.665 13:08:48 -- common/autotest_common.sh@10 -- # set +x 00:09:08.665 ************************************ 00:09:08.665 START TEST app_cmdline 00:09:08.665 ************************************ 00:09:08.665 13:08:49 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:09:08.665 * Looking for test storage... 
00:09:08.665 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:09:08.665 13:08:49 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:08.665 13:08:49 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:08.665 13:08:49 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=627342 00:09:08.665 13:08:49 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 627342 00:09:08.665 13:08:49 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 627342 ']' 00:09:08.665 13:08:49 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.665 13:08:49 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:08.665 13:08:49 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.665 13:08:49 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:08.665 13:08:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:08.665 [2024-07-26 13:08:49.173243] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:09:08.665 [2024-07-26 13:08:49.173307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid627342 ] 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3d:02.3 cannot be used 
00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:08.925 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:08.925 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:08.925 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:08.925 [2024-07-26 13:08:49.306509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.926 [2024-07-26 13:08:49.392875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.865 13:08:50 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:09.865 13:08:50 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:09:09.865 13:08:50 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:09:09.865 { 00:09:09.865 "version": "SPDK v24.09-pre git sha1 79c77cd86", 00:09:09.865 "fields": { 00:09:09.865 "major": 24, 00:09:09.865 "minor": 9, 00:09:09.865 "patch": 0, 00:09:09.865 "suffix": "-pre", 00:09:09.865 "commit": "79c77cd86" 00:09:09.865 } 00:09:09.865 } 00:09:09.865 13:08:50 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:09.865 13:08:50 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:09.865 13:08:50 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:09.865 13:08:50 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq 
-r ".[]" | sort)) 00:09:09.865 13:08:50 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:09.865 13:08:50 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.865 13:08:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:09.865 13:08:50 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:09.865 13:08:50 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:09.865 13:08:50 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.865 13:08:50 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:09.865 13:08:50 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:09.865 13:08:50 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:09.865 13:08:50 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:09:09.865 13:08:50 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:09.865 13:08:50 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:09:09.865 13:08:50 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:09.865 13:08:50 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:09:09.865 13:08:50 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:09.865 13:08:50 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:09:09.865 13:08:50 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:09.865 13:08:50 app_cmdline -- common/autotest_common.sh@644 -- # 
arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:09:09.865 13:08:50 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:09:09.865 13:08:50 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:10.125 request: 00:09:10.125 { 00:09:10.125 "method": "env_dpdk_get_mem_stats", 00:09:10.125 "req_id": 1 00:09:10.125 } 00:09:10.125 Got JSON-RPC error response 00:09:10.125 response: 00:09:10.125 { 00:09:10.125 "code": -32601, 00:09:10.125 "message": "Method not found" 00:09:10.125 } 00:09:10.125 13:08:50 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:09:10.125 13:08:50 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:10.125 13:08:50 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:10.125 13:08:50 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:10.125 13:08:50 app_cmdline -- app/cmdline.sh@1 -- # killprocess 627342 00:09:10.125 13:08:50 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 627342 ']' 00:09:10.125 13:08:50 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 627342 00:09:10.125 13:08:50 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:09:10.125 13:08:50 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:10.125 13:08:50 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 627342 00:09:10.125 13:08:50 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:10.125 13:08:50 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:10.125 13:08:50 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 627342' 00:09:10.125 killing process with pid 627342 00:09:10.125 13:08:50 app_cmdline -- common/autotest_common.sh@969 -- # kill 627342 00:09:10.125 13:08:50 app_cmdline -- 
common/autotest_common.sh@974 -- # wait 627342 00:09:10.693 00:09:10.693 real 0m1.939s 00:09:10.693 user 0m2.320s 00:09:10.693 sys 0m0.589s 00:09:10.693 13:08:50 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:10.693 13:08:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:10.694 ************************************ 00:09:10.694 END TEST app_cmdline 00:09:10.694 ************************************ 00:09:10.694 13:08:51 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:09:10.694 13:08:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:10.694 13:08:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:10.694 13:08:51 -- common/autotest_common.sh@10 -- # set +x 00:09:10.694 ************************************ 00:09:10.694 START TEST version 00:09:10.694 ************************************ 00:09:10.694 13:08:51 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:09:10.694 * Looking for test storage... 
00:09:10.694 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:09:10.694 13:08:51 version -- app/version.sh@17 -- # get_header_version major 00:09:10.694 13:08:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:09:10.694 13:08:51 version -- app/version.sh@14 -- # cut -f2 00:09:10.694 13:08:51 version -- app/version.sh@14 -- # tr -d '"' 00:09:10.694 13:08:51 version -- app/version.sh@17 -- # major=24 00:09:10.694 13:08:51 version -- app/version.sh@18 -- # get_header_version minor 00:09:10.694 13:08:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:09:10.694 13:08:51 version -- app/version.sh@14 -- # cut -f2 00:09:10.694 13:08:51 version -- app/version.sh@14 -- # tr -d '"' 00:09:10.694 13:08:51 version -- app/version.sh@18 -- # minor=9 00:09:10.694 13:08:51 version -- app/version.sh@19 -- # get_header_version patch 00:09:10.694 13:08:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:09:10.694 13:08:51 version -- app/version.sh@14 -- # cut -f2 00:09:10.694 13:08:51 version -- app/version.sh@14 -- # tr -d '"' 00:09:10.694 13:08:51 version -- app/version.sh@19 -- # patch=0 00:09:10.694 13:08:51 version -- app/version.sh@20 -- # get_header_version suffix 00:09:10.694 13:08:51 version -- app/version.sh@14 -- # cut -f2 00:09:10.694 13:08:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:09:10.694 13:08:51 version -- app/version.sh@14 -- # tr -d '"' 00:09:10.694 13:08:51 version -- app/version.sh@20 -- # suffix=-pre 00:09:10.694 13:08:51 version -- app/version.sh@22 -- # version=24.9 00:09:10.694 13:08:51 
version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:10.694 13:08:51 version -- app/version.sh@28 -- # version=24.9rc0 00:09:10.694 13:08:51 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:09:10.694 13:08:51 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:10.953 13:08:51 version -- app/version.sh@30 -- # py_version=24.9rc0 00:09:10.953 13:08:51 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:09:10.953 00:09:10.953 real 0m0.194s 00:09:10.953 user 0m0.085s 00:09:10.953 sys 0m0.155s 00:09:10.953 13:08:51 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:10.953 13:08:51 version -- common/autotest_common.sh@10 -- # set +x 00:09:10.953 ************************************ 00:09:10.953 END TEST version 00:09:10.953 ************************************ 00:09:10.953 13:08:51 -- spdk/autotest.sh@192 -- # '[' 1 -eq 1 ']' 00:09:10.953 13:08:51 -- spdk/autotest.sh@193 -- # run_test blockdev_general /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:09:10.953 13:08:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:10.953 13:08:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:10.953 13:08:51 -- common/autotest_common.sh@10 -- # set +x 00:09:10.953 ************************************ 00:09:10.953 START TEST blockdev_general 00:09:10.953 ************************************ 00:09:10.953 13:08:51 blockdev_general -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:09:10.953 * Looking for test storage... 
00:09:10.953 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:09:10.953 13:08:51 blockdev_general -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:09:10.953 13:08:51 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:09:10.953 13:08:51 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:10.953 13:08:51 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:09:10.953 13:08:51 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:09:10.953 13:08:51 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:09:10.953 13:08:51 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:09:10.953 13:08:51 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:10.953 13:08:51 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:09:10.953 13:08:51 blockdev_general -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:09:10.953 13:08:51 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:09:10.953 13:08:51 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:09:10.953 13:08:51 blockdev_general -- bdev/blockdev.sh@673 -- # uname -s 00:09:10.953 13:08:51 blockdev_general -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:09:10.953 13:08:51 blockdev_general -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:09:10.953 13:08:51 blockdev_general -- bdev/blockdev.sh@681 -- # test_type=bdev 00:09:10.953 13:08:51 blockdev_general -- bdev/blockdev.sh@682 -- # crypto_device= 00:09:10.953 13:08:51 blockdev_general -- bdev/blockdev.sh@683 -- # dek= 00:09:10.953 13:08:51 blockdev_general -- bdev/blockdev.sh@684 -- # env_ctx= 00:09:10.953 13:08:51 blockdev_general -- 
bdev/blockdev.sh@685 -- # wait_for_rpc= 00:09:10.953 13:08:51 blockdev_general -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:09:10.953 13:08:51 blockdev_general -- bdev/blockdev.sh@689 -- # [[ bdev == bdev ]] 00:09:10.953 13:08:51 blockdev_general -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:09:10.953 13:08:51 blockdev_general -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:09:10.953 13:08:51 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=627980 00:09:10.953 13:08:51 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:10.953 13:08:51 blockdev_general -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:09:10.953 13:08:51 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 627980 00:09:10.953 13:08:51 blockdev_general -- common/autotest_common.sh@831 -- # '[' -z 627980 ']' 00:09:10.953 13:08:51 blockdev_general -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.953 13:08:51 blockdev_general -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:10.953 13:08:51 blockdev_general -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.953 13:08:51 blockdev_general -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:10.953 13:08:51 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:11.212 [2024-07-26 13:08:51.503408] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:09:11.212 [2024-07-26 13:08:51.503471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid627980 ] 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3d:02.3 cannot be used 
00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:11.212 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:11.212 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:11.212 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:11.212 [2024-07-26 13:08:51.635130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.212 [2024-07-26 13:08:51.717664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.190 13:08:52 blockdev_general -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:12.190 13:08:52 blockdev_general -- common/autotest_common.sh@864 -- # return 0 00:09:12.190 13:08:52 blockdev_general -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:09:12.190 13:08:52 blockdev_general -- bdev/blockdev.sh@695 -- # setup_bdev_conf 00:09:12.190 13:08:52 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:09:12.190 13:08:52 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.190 13:08:52 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:12.190 [2024-07-26 13:08:52.639324] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:12.190 [2024-07-26 13:08:52.639380] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:12.190 00:09:12.190 [2024-07-26 13:08:52.647313] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: Malloc2 00:09:12.190 [2024-07-26 13:08:52.647337] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:12.190 00:09:12.190 Malloc0 00:09:12.190 Malloc1 00:09:12.477 Malloc2 00:09:12.477 Malloc3 00:09:12.477 Malloc4 00:09:12.477 Malloc5 00:09:12.477 Malloc6 00:09:12.477 Malloc7 00:09:12.477 Malloc8 00:09:12.477 Malloc9 00:09:12.477 [2024-07-26 13:08:52.782527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:12.477 [2024-07-26 13:08:52.782571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.477 [2024-07-26 13:08:52.782590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc91850 00:09:12.477 [2024-07-26 13:08:52.782601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.477 [2024-07-26 13:08:52.783835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.477 [2024-07-26 13:08:52.783861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:09:12.477 TestPT 00:09:12.477 13:08:52 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.477 13:08:52 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile bs=2048 count=5000 00:09:12.477 5000+0 records in 00:09:12.477 5000+0 records out 00:09:12.477 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0256216 s, 400 MB/s 00:09:12.477 13:08:52 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile AIO0 2048 00:09:12.477 13:08:52 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.477 13:08:52 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:12.477 AIO0 00:09:12.477 13:08:52 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.477 
13:08:52 blockdev_general -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:09:12.477 13:08:52 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.477 13:08:52 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:12.477 13:08:52 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.477 13:08:52 blockdev_general -- bdev/blockdev.sh@739 -- # cat 00:09:12.477 13:08:52 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:09:12.477 13:08:52 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.477 13:08:52 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:12.477 13:08:52 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.477 13:08:52 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:09:12.477 13:08:52 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.477 13:08:52 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:12.477 13:08:52 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.477 13:08:52 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:12.477 13:08:52 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.477 13:08:52 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:12.477 13:08:52 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.477 13:08:52 blockdev_general -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:09:12.477 13:08:52 blockdev_general -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:09:12.477 13:08:52 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.477 13:08:52 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:12.477 13:08:52 blockdev_general -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 
00:09:12.737 13:08:53 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.737 13:08:53 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:09:12.737 13:08:53 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r .name 00:09:12.738 13:08:53 blockdev_general -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "2c46e97c-57aa-4658-8a74-e88dbe1d5807"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2c46e97c-57aa-4658-8a74-e88dbe1d5807",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "5b06c5a7-0e33-5c30-bc74-dca713720c77"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "5b06c5a7-0e33-5c30-bc74-dca713720c77",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": 
false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "e8a9e14b-51ed-5389-9f4f-a35090b56e0b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "e8a9e14b-51ed-5389-9f4f-a35090b56e0b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "9d709408-1879-5b65-a057-a3583b8ba1c5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9d709408-1879-5b65-a057-a3583b8ba1c5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' 
"get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "e19f1637-f937-5af5-a533-800b442131c7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e19f1637-f937-5af5-a533-800b442131c7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "f593c122-ae0b-54f7-a669-7e7cbe0197a5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f593c122-ae0b-54f7-a669-7e7cbe0197a5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' 
' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "7ceee860-167d-573d-89a7-aee7fa0988ad"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7ceee860-167d-573d-89a7-aee7fa0988ad",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "8949cbd7-c94c-5f11-8318-381f3ee383ab"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8949cbd7-c94c-5f11-8318-381f3ee383ab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "dbe91e20-c295-5a7a-b664-0eb050d96a0a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dbe91e20-c295-5a7a-b664-0eb050d96a0a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "d43793d5-1d9d-5134-aaeb-e10da8aae7a8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d43793d5-1d9d-5134-aaeb-e10da8aae7a8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' 
"seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "b36ac161-8c07-592f-bceb-acc9c48c3244"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b36ac161-8c07-592f-bceb-acc9c48c3244",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "b75803da-aba6-53b7-bab4-f524a2039f81"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b75803da-aba6-53b7-bab4-f524a2039f81",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": 
true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "db10fa74-5f2c-464f-8ac7-2e48b84559ae"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "db10fa74-5f2c-464f-8ac7-2e48b84559ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "db10fa74-5f2c-464f-8ac7-2e48b84559ae",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "37fa96eb-e3c1-4aa8-93a3-88c25d02f483",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": 
"7be745ab-a5b2-4498-96ed-da235dfe4c92",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "ce664658-b044-40e3-aff3-548a5e790e8b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "ce664658-b044-40e3-aff3-548a5e790e8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "ce664658-b044-40e3-aff3-548a5e790e8b",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "475ea17c-0754-49da-bb35-4e973cdbedbb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "911d2080-bd0a-408c-a6dc-2bed310c3ed0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' 
"4ec05c3f-6930-4e0b-8221-f9f082d79464"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4ec05c3f-6930-4e0b-8221-f9f082d79464",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "4ec05c3f-6930-4e0b-8221-f9f082d79464",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "58b6d072-4453-4743-933e-21d40f584816",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "b519b972-c9f7-4466-a38e-831d1165821a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "b6f7a565-a505-4fc4-ba0f-f19e391292d6"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "b6f7a565-a505-4fc4-ba0f-f19e391292d6",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:09:12.738 13:08:53 blockdev_general -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:09:12.738 13:08:53 blockdev_general -- bdev/blockdev.sh@751 -- # hello_world_bdev=Malloc0 00:09:12.738 13:08:53 blockdev_general -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:09:12.738 13:08:53 blockdev_general -- bdev/blockdev.sh@753 -- # killprocess 627980 00:09:12.738 13:08:53 blockdev_general -- common/autotest_common.sh@950 -- # '[' -z 627980 ']' 00:09:12.738 13:08:53 blockdev_general -- common/autotest_common.sh@954 -- # kill -0 627980 00:09:12.738 13:08:53 blockdev_general -- common/autotest_common.sh@955 -- # uname 00:09:12.738 13:08:53 blockdev_general -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:12.738 13:08:53 blockdev_general -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 627980 00:09:12.996 13:08:53 blockdev_general -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:12.996 13:08:53 blockdev_general -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:12.996 13:08:53 blockdev_general -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 627980' 00:09:12.996 killing process with pid 627980 00:09:12.996 13:08:53 blockdev_general -- common/autotest_common.sh@969 -- # kill 627980 00:09:12.996 13:08:53 blockdev_general -- common/autotest_common.sh@974 -- # wait 627980 00:09:13.254 13:08:53 blockdev_general -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:13.254 13:08:53 blockdev_general -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:09:13.254 13:08:53 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:09:13.254 13:08:53 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:13.254 13:08:53 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:13.254 ************************************ 00:09:13.254 START TEST bdev_hello_world 00:09:13.254 ************************************ 00:09:13.254 13:08:53 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:09:13.512 [2024-07-26 13:08:53.793701] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:09:13.512 [2024-07-26 13:08:53.793761] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid628282 ] 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3d:02.3 cannot be used 
00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:13.512 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:13.512 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:13.512 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:13.512 [2024-07-26 13:08:53.924799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.512 [2024-07-26 13:08:54.008280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.771 [2024-07-26 13:08:54.168159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:13.771 [2024-07-26 13:08:54.168209] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:09:13.771 [2024-07-26 13:08:54.168223] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:09:13.771 [2024-07-26 13:08:54.176167] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:13.771 [2024-07-26 13:08:54.176210] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:13.771 [2024-07-26 13:08:54.184175] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:13.771 [2024-07-26 13:08:54.184202] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:13.771 [2024-07-26 13:08:54.255800] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:13.771 [2024-07-26 13:08:54.255849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.771 [2024-07-26 13:08:54.255864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbb2690 00:09:13.771 [2024-07-26 13:08:54.255875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.771 [2024-07-26 13:08:54.257161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.771 [2024-07-26 13:08:54.257191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:09:14.029 [2024-07-26 13:08:54.389672] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:14.029 [2024-07-26 13:08:54.389742] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:09:14.029 [2024-07-26 13:08:54.389796] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:14.029 [2024-07-26 13:08:54.389871] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:14.029 [2024-07-26 13:08:54.389948] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:14.029 [2024-07-26 13:08:54.389978] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:14.029 [2024-07-26 13:08:54.390041] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:09:14.029 00:09:14.029 [2024-07-26 13:08:54.390082] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:14.287 00:09:14.287 real 0m0.927s 00:09:14.287 user 0m0.594s 00:09:14.287 sys 0m0.293s 00:09:14.287 13:08:54 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:14.287 13:08:54 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:14.287 ************************************ 00:09:14.287 END TEST bdev_hello_world 00:09:14.287 ************************************ 00:09:14.287 13:08:54 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:09:14.287 13:08:54 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:14.287 13:08:54 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:14.287 13:08:54 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:14.287 ************************************ 00:09:14.287 START TEST bdev_bounds 00:09:14.287 ************************************ 00:09:14.287 13:08:54 blockdev_general.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:09:14.287 13:08:54 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=628558 00:09:14.288 13:08:54 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:14.288 13:08:54 blockdev_general.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:09:14.288 13:08:54 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 628558' 00:09:14.288 Process bdevio pid: 628558 00:09:14.288 13:08:54 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 628558 00:09:14.288 13:08:54 blockdev_general.bdev_bounds -- common/autotest_common.sh@831 -- # '[' 
-z 628558 ']' 00:09:14.288 13:08:54 blockdev_general.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.288 13:08:54 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:14.288 13:08:54 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.288 13:08:54 blockdev_general.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:14.288 13:08:54 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:14.288 [2024-07-26 13:08:54.810977] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:09:14.288 [2024-07-26 13:08:54.811035] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid628558 ] 00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:14.546 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:14.546 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:14.546 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:14.546 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:14.546 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:14.546 EAL: Requested device 0000:3d:01.5 cannot be used 
00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.546 EAL: Requested device 0000:3d:01.6 cannot be used
00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.546 EAL: Requested device 0000:3d:01.7 cannot be used
00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.546 EAL: Requested device 0000:3d:02.0 cannot be used
00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.546 EAL: Requested device 0000:3d:02.1 cannot be used
00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.546 EAL: Requested device 0000:3d:02.2 cannot be used
00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.546 EAL: Requested device 0000:3d:02.3 cannot be used
00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.546 EAL: Requested device 0000:3d:02.4 cannot be used
00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.546 EAL: Requested device 0000:3d:02.5 cannot be used
00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.546 EAL: Requested device 0000:3d:02.6 cannot be used
00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.546 EAL: Requested device 0000:3d:02.7 cannot be used
00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.546 EAL: Requested device 0000:3f:01.0 cannot be used
00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.546 EAL: Requested device 0000:3f:01.1 cannot be used
00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.546 EAL: Requested device 0000:3f:01.2 cannot be used
00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.546 EAL: Requested device 0000:3f:01.3 cannot be used
00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.546 EAL: Requested device 0000:3f:01.4 cannot be used
00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.546 EAL: Requested device 0000:3f:01.5 cannot be used
00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.546 EAL: Requested device 0000:3f:01.6 cannot be used
00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.546 EAL: Requested device 0000:3f:01.7 cannot be used
00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.546 EAL: Requested device 0000:3f:02.0 cannot be used
00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.546 EAL: Requested device 0000:3f:02.1 cannot be used
00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.546 EAL: Requested device 0000:3f:02.2 cannot be used
00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.546 EAL: Requested device 0000:3f:02.3 cannot be used
00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.546 EAL: Requested device 0000:3f:02.4 cannot be used
00:09:14.546 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.546 EAL: Requested device 0000:3f:02.5 cannot be used
00:09:14.547 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.547 EAL: Requested device 0000:3f:02.6 cannot be used
00:09:14.547 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:14.547 EAL: Requested device 0000:3f:02.7 cannot be used
00:09:14.547 [2024-07-26 13:08:54.943076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:14.547 [2024-07-26 13:08:55.031706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:09:14.547 [2024-07-26 13:08:55.031799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:09:14.547 [2024-07-26 13:08:55.031803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:14.805 [2024-07-26 13:08:55.177158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:09:14.805 [2024-07-26 13:08:55.177203] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:09:14.805 [2024-07-26 13:08:55.177216] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:09:14.805 [2024-07-26 13:08:55.185172] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:09:14.805 [2024-07-26 13:08:55.185198] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:09:14.805 [2024-07-26 13:08:55.193183] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:09:14.805 [2024-07-26 13:08:55.193206] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:09:14.805 [2024-07-26 13:08:55.264552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:09:14.805 [2024-07-26 13:08:55.264599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:14.805 [2024-07-26 13:08:55.264614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1694080
00:09:14.805 [2024-07-26 13:08:55.264626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:14.805 [2024-07-26 13:08:55.265979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:14.805 [2024-07-26 13:08:55.266007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:09:15.371 13:08:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:15.371 13:08:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@864 -- # return 0
00:09:15.371 13:08:55 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests
00:09:15.371 I/O targets:
00:09:15.371 Malloc0: 65536 blocks of 512 bytes (32 MiB)
00:09:15.371 Malloc1p0: 32768 blocks of 512 bytes (16 MiB)
00:09:15.371 Malloc1p1: 32768 blocks of 512 bytes (16 MiB)
00:09:15.371 Malloc2p0: 8192 blocks of 512 bytes (4 MiB)
00:09:15.371 Malloc2p1: 8192 blocks of 512 bytes (4 MiB)
00:09:15.371 Malloc2p2: 8192 blocks of 512 bytes (4 MiB)
00:09:15.371 Malloc2p3: 8192 blocks of 512 bytes (4 MiB)
00:09:15.371 Malloc2p4: 8192 blocks of 512 bytes (4 MiB)
00:09:15.371 Malloc2p5: 8192 blocks of 512 bytes (4 MiB)
00:09:15.371 Malloc2p6: 8192 blocks of 512 bytes (4 MiB)
00:09:15.371 Malloc2p7: 8192 blocks of 512 bytes (4 MiB)
00:09:15.371 TestPT: 65536 blocks of 512 bytes (32 MiB)
00:09:15.371 raid0: 131072 blocks of 512 bytes (64 MiB)
00:09:15.371 concat0: 131072 blocks of 512 bytes (64 MiB)
00:09:15.371 raid1: 65536 blocks of 512 bytes (32 MiB)
00:09:15.371 AIO0: 5000 blocks of 2048 bytes (10 MiB)
00:09:15.371
00:09:15.371
00:09:15.371 CUnit - A unit testing framework for C - Version 2.1-3
00:09:15.371 http://cunit.sourceforge.net/
00:09:15.371
00:09:15.371
00:09:15.371 Suite: bdevio tests on: AIO0
00:09:15.371 Test: blockdev write read block ...passed
00:09:15.371 Test: blockdev write zeroes read block ...passed
00:09:15.371 Test: blockdev write zeroes read no split ...passed
00:09:15.371 Test: blockdev write zeroes read split ...passed
00:09:15.371 Test: blockdev write zeroes read split partial ...passed
00:09:15.371 Test: blockdev reset ...passed
00:09:15.371 Test: blockdev write read 8 blocks ...passed
00:09:15.371 Test: blockdev write read size > 128k ...passed
00:09:15.371 Test: blockdev write read invalid size ...passed
00:09:15.371 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:15.371 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:15.371 Test: blockdev write read max offset ...passed
00:09:15.371 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:15.371 Test: blockdev writev readv 8 blocks ...passed
00:09:15.371 Test: blockdev writev readv 30 x 1block ...passed
00:09:15.371 Test: blockdev writev readv block ...passed
00:09:15.371 Test: blockdev writev readv size > 128k ...passed
00:09:15.371 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:15.371 Test: blockdev comparev and writev ...passed
00:09:15.372 Test: blockdev nvme passthru rw ...passed
00:09:15.372 Test: blockdev nvme passthru vendor specific ...passed
00:09:15.372 Test: blockdev nvme admin passthru ...passed
00:09:15.372 Test: blockdev copy ...passed
00:09:15.372 Suite: bdevio tests on: raid1
00:09:15.372 Test: blockdev write read block ...passed
00:09:15.372 Test: blockdev write zeroes read block ...passed
00:09:15.372 Test: blockdev write zeroes read no split ...passed
00:09:15.372 Test: blockdev write zeroes read split ...passed
00:09:15.372 Test: blockdev write zeroes read split partial ...passed
00:09:15.372 Test: blockdev reset ...passed
00:09:15.372 Test: blockdev write read 8 blocks ...passed
00:09:15.372 Test: blockdev write read size > 128k ...passed
00:09:15.372 Test: blockdev write read invalid size ...passed
00:09:15.372 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:15.372 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:15.372 Test: blockdev write read max offset ...passed
00:09:15.372 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:15.372 Test: blockdev writev readv 8 blocks ...passed
00:09:15.372 Test: blockdev writev readv 30 x 1block ...passed
00:09:15.372 Test: blockdev writev readv block ...passed
00:09:15.372 Test: blockdev writev readv size > 128k ...passed
00:09:15.372 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:15.372 Test: blockdev comparev and writev ...passed
00:09:15.372 Test: blockdev nvme passthru rw ...passed
00:09:15.372 Test: blockdev nvme passthru vendor specific ...passed
00:09:15.372 Test: blockdev nvme admin passthru ...passed
00:09:15.372 Test: blockdev copy ...passed
00:09:15.372 Suite: bdevio tests on: concat0
00:09:15.372 Test: blockdev write read block ...passed
00:09:15.372 Test: blockdev write zeroes read block ...passed
00:09:15.372 Test: blockdev write zeroes read no split ...passed
00:09:15.372 Test: blockdev write zeroes read split ...passed
00:09:15.372 Test: blockdev write zeroes read split partial ...passed
00:09:15.372 Test: blockdev reset ...passed
00:09:15.372 Test: blockdev write read 8 blocks ...passed
00:09:15.372 Test: blockdev write read size > 128k ...passed
00:09:15.372 Test: blockdev write read invalid size ...passed
00:09:15.372 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:15.372 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:15.372 Test: blockdev write read max offset ...passed
00:09:15.372 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:15.372 Test: blockdev writev readv 8 blocks ...passed
00:09:15.372 Test: blockdev writev readv 30 x 1block ...passed
00:09:15.372 Test: blockdev writev readv block ...passed
00:09:15.372 Test: blockdev writev readv size > 128k ...passed
00:09:15.372 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:15.372 Test: blockdev comparev and writev ...passed
00:09:15.372 Test: blockdev nvme passthru rw ...passed
00:09:15.372 Test: blockdev nvme passthru vendor specific ...passed
00:09:15.372 Test: blockdev nvme admin passthru ...passed
00:09:15.372 Test: blockdev copy ...passed
00:09:15.372 Suite: bdevio tests on: raid0
00:09:15.372 Test: blockdev write read block ...passed
00:09:15.372 Test: blockdev write zeroes read block ...passed
00:09:15.372 Test: blockdev write zeroes read no split ...passed
00:09:15.372 Test: blockdev write zeroes read split ...passed
00:09:15.372 Test: blockdev write zeroes read split partial ...passed
00:09:15.372 Test: blockdev reset ...passed
00:09:15.372 Test: blockdev write read 8 blocks ...passed
00:09:15.372 Test: blockdev write read size > 128k ...passed
00:09:15.372 Test: blockdev write read invalid size ...passed
00:09:15.372 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:15.372 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:15.372 Test: blockdev write read max offset ...passed
00:09:15.372 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:15.372 Test: blockdev writev readv 8 blocks ...passed
00:09:15.372 Test: blockdev writev readv 30 x 1block ...passed
00:09:15.372 Test: blockdev writev readv block ...passed
00:09:15.372 Test: blockdev writev readv size > 128k ...passed
00:09:15.372 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:15.372 Test: blockdev comparev and writev ...passed
00:09:15.372 Test: blockdev nvme passthru rw ...passed
00:09:15.372 Test: blockdev nvme passthru vendor specific ...passed
00:09:15.372 Test: blockdev nvme admin passthru ...passed
00:09:15.372 Test: blockdev copy ...passed
00:09:15.372 Suite: bdevio tests on: TestPT
00:09:15.372 Test: blockdev write read block ...passed
00:09:15.372 Test: blockdev write zeroes read block ...passed
00:09:15.372 Test: blockdev write zeroes read no split ...passed
00:09:15.372 Test: blockdev write zeroes read split ...passed
00:09:15.631 Test: blockdev write zeroes read split partial ...passed
00:09:15.631 Test: blockdev reset ...passed
00:09:15.631 Test: blockdev write read 8 blocks ...passed
00:09:15.631 Test: blockdev write read size > 128k ...passed
00:09:15.631 Test: blockdev write read invalid size ...passed
00:09:15.631 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:15.631 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:15.631 Test: blockdev write read max offset ...passed
00:09:15.631 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:15.631 Test: blockdev writev readv 8 blocks ...passed
00:09:15.631 Test: blockdev writev readv 30 x 1block ...passed
00:09:15.631 Test: blockdev writev readv block ...passed
00:09:15.631 Test: blockdev writev readv size > 128k ...passed
00:09:15.631 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:15.631 Test: blockdev comparev and writev ...passed
00:09:15.631 Test: blockdev nvme passthru rw ...passed
00:09:15.631 Test: blockdev nvme passthru vendor specific ...passed
00:09:15.631 Test: blockdev nvme admin passthru ...passed
00:09:15.631 Test: blockdev copy ...passed
00:09:15.631 Suite: bdevio tests on: Malloc2p7
00:09:15.631 Test: blockdev write read block ...passed
00:09:15.631 Test: blockdev write zeroes read block ...passed
00:09:15.631 Test: blockdev write zeroes read no split ...passed
00:09:15.631 Test: blockdev write zeroes read split ...passed
00:09:15.631 Test: blockdev write zeroes read split partial ...passed
00:09:15.631 Test: blockdev reset ...passed
00:09:15.631 Test: blockdev write read 8 blocks ...passed
00:09:15.631 Test: blockdev write read size > 128k ...passed
00:09:15.632 Test: blockdev write read invalid size ...passed
00:09:15.632 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:15.632 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:15.632 Test: blockdev write read max offset ...passed
00:09:15.632 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:15.632 Test: blockdev writev readv 8 blocks ...passed
00:09:15.632 Test: blockdev writev readv 30 x 1block ...passed
00:09:15.632 Test: blockdev writev readv block ...passed
00:09:15.632 Test: blockdev writev readv size > 128k ...passed
00:09:15.632 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:15.632 Test: blockdev comparev and writev ...passed
00:09:15.632 Test: blockdev nvme passthru rw ...passed
00:09:15.632 Test: blockdev nvme passthru vendor specific ...passed
00:09:15.632 Test: blockdev nvme admin passthru ...passed
00:09:15.632 Test: blockdev copy ...passed
00:09:15.632 Suite: bdevio tests on: Malloc2p6
00:09:15.632 Test: blockdev write read block ...passed
00:09:15.632 Test: blockdev write zeroes read block ...passed
00:09:15.632 Test: blockdev write zeroes read no split ...passed
00:09:15.632 Test: blockdev write zeroes read split ...passed
00:09:15.632 Test: blockdev write zeroes read split partial ...passed
00:09:15.632 Test: blockdev reset ...passed
00:09:15.632 Test: blockdev write read 8 blocks ...passed
00:09:15.632 Test: blockdev write read size > 128k ...passed
00:09:15.632 Test: blockdev write read invalid size ...passed
00:09:15.632 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:15.632 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:15.632 Test: blockdev write read max offset ...passed
00:09:15.632 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:15.632 Test: blockdev writev readv 8 blocks ...passed
00:09:15.632 Test: blockdev writev readv 30 x 1block ...passed
00:09:15.632 Test: blockdev writev readv block ...passed
00:09:15.632 Test: blockdev writev readv size > 128k ...passed
00:09:15.632 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:15.632 Test: blockdev comparev and writev ...passed
00:09:15.632 Test: blockdev nvme passthru rw ...passed
00:09:15.632 Test: blockdev nvme passthru vendor specific ...passed
00:09:15.632 Test: blockdev nvme admin passthru ...passed
00:09:15.632 Test: blockdev copy ...passed
00:09:15.632 Suite: bdevio tests on: Malloc2p5
00:09:15.632 Test: blockdev write read block ...passed
00:09:15.632 Test: blockdev write zeroes read block ...passed
00:09:15.632 Test: blockdev write zeroes read no split ...passed
00:09:15.632 Test: blockdev write zeroes read split ...passed
00:09:15.632 Test: blockdev write zeroes read split partial ...passed
00:09:15.632 Test: blockdev reset ...passed
00:09:15.632 Test: blockdev write read 8 blocks ...passed
00:09:15.632 Test: blockdev write read size > 128k ...passed
00:09:15.632 Test: blockdev write read invalid size ...passed
00:09:15.632 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:15.632 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:15.632 Test: blockdev write read max offset ...passed
00:09:15.632 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:15.632 Test: blockdev writev readv 8 blocks ...passed
00:09:15.632 Test: blockdev writev readv 30 x 1block ...passed
00:09:15.632 Test: blockdev writev readv block ...passed
00:09:15.632 Test: blockdev writev readv size > 128k ...passed
00:09:15.632 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:15.632 Test: blockdev comparev and writev ...passed
00:09:15.632 Test: blockdev nvme passthru rw ...passed
00:09:15.632 Test: blockdev nvme passthru vendor specific ...passed
00:09:15.632 Test: blockdev nvme admin passthru ...passed
00:09:15.632 Test: blockdev copy ...passed
00:09:15.632 Suite: bdevio tests on: Malloc2p4
00:09:15.632 Test: blockdev write read block ...passed
00:09:15.632 Test: blockdev write zeroes read block ...passed
00:09:15.632 Test: blockdev write zeroes read no split ...passed
00:09:15.632 Test: blockdev write zeroes read split ...passed
00:09:15.632 Test: blockdev write zeroes read split partial ...passed
00:09:15.632 Test: blockdev reset ...passed
00:09:15.632 Test: blockdev write read 8 blocks ...passed
00:09:15.632 Test: blockdev write read size > 128k ...passed
00:09:15.632 Test: blockdev write read invalid size ...passed
00:09:15.632 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:15.632 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:15.632 Test: blockdev write read max offset ...passed
00:09:15.632 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:15.632 Test: blockdev writev readv 8 blocks ...passed
00:09:15.632 Test: blockdev writev readv 30 x 1block ...passed
00:09:15.632 Test: blockdev writev readv block ...passed
00:09:15.632 Test: blockdev writev readv size > 128k ...passed
00:09:15.632 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:15.632 Test: blockdev comparev and writev ...passed
00:09:15.632 Test: blockdev nvme passthru rw ...passed
00:09:15.632 Test: blockdev nvme passthru vendor specific ...passed
00:09:15.632 Test: blockdev nvme admin passthru ...passed
00:09:15.632 Test: blockdev copy ...passed
00:09:15.632 Suite: bdevio tests on: Malloc2p3
00:09:15.632 Test: blockdev write read block ...passed
00:09:15.632 Test: blockdev write zeroes read block ...passed
00:09:15.632 Test: blockdev write zeroes read no split ...passed
00:09:15.632 Test: blockdev write zeroes read split ...passed
00:09:15.632 Test: blockdev write zeroes read split partial ...passed
00:09:15.632 Test: blockdev reset ...passed
00:09:15.632 Test: blockdev write read 8 blocks ...passed
00:09:15.632 Test: blockdev write read size > 128k ...passed
00:09:15.632 Test: blockdev write read invalid size ...passed
00:09:15.632 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:15.632 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:15.632 Test: blockdev write read max offset ...passed
00:09:15.632 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:15.632 Test: blockdev writev readv 8 blocks ...passed
00:09:15.632 Test: blockdev writev readv 30 x 1block ...passed
00:09:15.632 Test: blockdev writev readv block ...passed
00:09:15.632 Test: blockdev writev readv size > 128k ...passed
00:09:15.632 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:15.632 Test: blockdev comparev and writev ...passed
00:09:15.632 Test: blockdev nvme passthru rw ...passed
00:09:15.632 Test: blockdev nvme passthru vendor specific ...passed
00:09:15.632 Test: blockdev nvme admin passthru ...passed
00:09:15.632 Test: blockdev copy ...passed
00:09:15.632 Suite: bdevio tests on: Malloc2p2
00:09:15.632 Test: blockdev write read block ...passed
00:09:15.632 Test: blockdev write zeroes read block ...passed
00:09:15.632 Test: blockdev write zeroes read no split ...passed
00:09:15.632 Test: blockdev write zeroes read split ...passed
00:09:15.632 Test: blockdev write zeroes read split partial ...passed
00:09:15.632 Test: blockdev reset ...passed
00:09:15.632 Test: blockdev write read 8 blocks ...passed
00:09:15.632 Test: blockdev write read size > 128k ...passed
00:09:15.632 Test: blockdev write read invalid size ...passed
00:09:15.632 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:15.632 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:15.632 Test: blockdev write read max offset ...passed
00:09:15.632 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:15.632 Test: blockdev writev readv 8 blocks ...passed
00:09:15.632 Test: blockdev writev readv 30 x 1block ...passed
00:09:15.632 Test: blockdev writev readv block ...passed
00:09:15.632 Test: blockdev writev readv size > 128k ...passed
00:09:15.632 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:15.632 Test: blockdev comparev and writev ...passed
00:09:15.632 Test: blockdev nvme passthru rw ...passed
00:09:15.632 Test: blockdev nvme passthru vendor specific ...passed
00:09:15.632 Test: blockdev nvme admin passthru ...passed
00:09:15.632 Test: blockdev copy ...passed
00:09:15.632 Suite: bdevio tests on: Malloc2p1
00:09:15.632 Test: blockdev write read block ...passed
00:09:15.632 Test: blockdev write zeroes read block ...passed
00:09:15.632 Test: blockdev write zeroes read no split ...passed
00:09:15.632 Test: blockdev write zeroes read split ...passed
00:09:15.632 Test: blockdev write zeroes read split partial ...passed
00:09:15.632 Test: blockdev reset ...passed
00:09:15.632 Test: blockdev write read 8 blocks ...passed
00:09:15.632 Test: blockdev write read size > 128k ...passed
00:09:15.632 Test: blockdev write read invalid size ...passed
00:09:15.632 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:15.632 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:15.632 Test: blockdev write read max offset ...passed
00:09:15.632 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:15.632 Test: blockdev writev readv 8 blocks ...passed
00:09:15.632 Test: blockdev writev readv 30 x 1block ...passed
00:09:15.632 Test: blockdev writev readv block ...passed
00:09:15.632 Test: blockdev writev readv size > 128k ...passed
00:09:15.632 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:15.632 Test: blockdev comparev and writev ...passed
00:09:15.632 Test: blockdev nvme passthru rw ...passed
00:09:15.632 Test: blockdev nvme passthru vendor specific ...passed
00:09:15.632 Test: blockdev nvme admin passthru ...passed
00:09:15.632 Test: blockdev copy ...passed
00:09:15.632 Suite: bdevio tests on: Malloc2p0
00:09:15.632 Test: blockdev write read block ...passed
00:09:15.632 Test: blockdev write zeroes read block ...passed
00:09:15.632 Test: blockdev write zeroes read no split ...passed
00:09:15.632 Test: blockdev write zeroes read split ...passed
00:09:15.632 Test: blockdev write zeroes read split partial ...passed
00:09:15.632 Test: blockdev reset ...passed
00:09:15.632 Test: blockdev write read 8 blocks ...passed
00:09:15.632 Test: blockdev write read size > 128k ...passed
00:09:15.632 Test: blockdev write read invalid size ...passed
00:09:15.632 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:15.633 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:15.633 Test: blockdev write read max offset ...passed
00:09:15.633 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:15.633 Test: blockdev writev readv 8 blocks ...passed
00:09:15.633 Test: blockdev writev readv 30 x 1block ...passed
00:09:15.633 Test: blockdev writev readv block ...passed
00:09:15.633 Test: blockdev writev readv size > 128k ...passed
00:09:15.633 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:15.633 Test: blockdev comparev and writev ...passed
00:09:15.633 Test: blockdev nvme passthru rw ...passed
00:09:15.633 Test: blockdev nvme passthru vendor specific ...passed
00:09:15.633 Test: blockdev nvme admin passthru ...passed
00:09:15.633 Test: blockdev copy ...passed
00:09:15.633 Suite: bdevio tests on: Malloc1p1
00:09:15.633 Test: blockdev write read block ...passed
00:09:15.633 Test: blockdev write zeroes read block ...passed
00:09:15.633 Test: blockdev write zeroes read no split ...passed
00:09:15.633 Test: blockdev write zeroes read split ...passed
00:09:15.633 Test: blockdev write zeroes read split partial ...passed
00:09:15.633 Test: blockdev reset ...passed
00:09:15.633 Test: blockdev write read 8 blocks ...passed
00:09:15.633 Test: blockdev write read size > 128k ...passed
00:09:15.633 Test: blockdev write read invalid size ...passed
00:09:15.633 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:15.633 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:15.633 Test: blockdev write read max offset ...passed
00:09:15.633 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:15.633 Test: blockdev writev readv 8 blocks ...passed
00:09:15.633 Test: blockdev writev readv 30 x 1block ...passed
00:09:15.633 Test: blockdev writev readv block ...passed
00:09:15.633 Test: blockdev writev readv size > 128k ...passed
00:09:15.633 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:15.633 Test: blockdev comparev and writev ...passed
00:09:15.633 Test: blockdev nvme passthru rw ...passed
00:09:15.633 Test: blockdev nvme passthru vendor specific ...passed
00:09:15.633 Test: blockdev nvme admin passthru ...passed
00:09:15.633 Test: blockdev copy ...passed
00:09:15.633 Suite: bdevio tests on: Malloc1p0
00:09:15.633 Test: blockdev write read block ...passed
00:09:15.633 Test: blockdev write zeroes read block ...passed
00:09:15.633 Test: blockdev write zeroes read no split ...passed
00:09:15.633 Test: blockdev write zeroes read split ...passed
00:09:15.633 Test: blockdev write zeroes read split partial ...passed
00:09:15.633 Test: blockdev reset ...passed
00:09:15.633 Test: blockdev write read 8 blocks ...passed
00:09:15.633 Test: blockdev write read size > 128k ...passed
00:09:15.633 Test: blockdev write read invalid size ...passed
00:09:15.633 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:15.633 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:15.633 Test: blockdev write read max offset ...passed
00:09:15.633 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:15.633 Test: blockdev writev readv 8 blocks ...passed
00:09:15.633 Test: blockdev writev readv 30 x 1block ...passed
00:09:15.633 Test: blockdev writev readv block ...passed
00:09:15.633 Test: blockdev writev readv size > 128k ...passed
00:09:15.633 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:15.633 Test: blockdev comparev and writev ...passed
00:09:15.633 Test: blockdev nvme passthru rw ...passed
00:09:15.633 Test: blockdev nvme passthru vendor specific ...passed
00:09:15.633 Test: blockdev nvme admin passthru ...passed
00:09:15.633 Test: blockdev copy ...passed
00:09:15.633 Suite: bdevio tests on: Malloc0
00:09:15.633 Test: blockdev write read block ...passed
00:09:15.633 Test: blockdev write zeroes read block ...passed
00:09:15.633 Test: blockdev write zeroes read no split ...passed
00:09:15.633 Test: blockdev write zeroes read split ...passed
00:09:15.633 Test: blockdev write zeroes read split partial ...passed
00:09:15.633 Test: blockdev reset ...passed
00:09:15.633 Test: blockdev write read 8 blocks ...passed
00:09:15.633 Test: blockdev write read size > 128k ...passed
00:09:15.633 Test: blockdev write read invalid size ...passed
00:09:15.633 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:09:15.633 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:09:15.633 Test: blockdev write read max offset ...passed
00:09:15.633 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:09:15.633 Test: blockdev writev readv 8 blocks ...passed
00:09:15.633 Test: blockdev writev readv 30 x 1block ...passed
00:09:15.633 Test: blockdev writev readv block ...passed
00:09:15.633 Test: blockdev writev readv size > 128k ...passed
00:09:15.633 Test: blockdev writev readv size > 128k in two iovs ...passed
00:09:15.633 Test: blockdev comparev and writev ...passed
00:09:15.633 Test: blockdev nvme passthru rw ...passed
00:09:15.633 Test: blockdev nvme passthru vendor specific ...passed
00:09:15.633 Test: blockdev nvme admin passthru ...passed
00:09:15.633 Test: blockdev copy ...passed
00:09:15.633
00:09:15.633 Run Summary: Type Total Ran Passed Failed Inactive
00:09:15.633 suites 16 16 n/a 0 0
00:09:15.633 tests 368 368 368 0 0
00:09:15.633 asserts 2224 2224 2224 0 n/a
00:09:15.633
00:09:15.633 Elapsed time = 0.475 seconds
00:09:15.633 0
00:09:15.633 13:08:56 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 628558
00:09:15.633 13:08:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 628558 ']'
00:09:15.633 13:08:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 628558
00:09:15.633 13:08:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@955 -- # uname
00:09:15.633 13:08:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:15.633 13:08:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 628558
00:09:15.633 13:08:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:15.633 13:08:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:15.633 13:08:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 628558'
00:09:15.633 killing process with pid 628558
00:09:15.633 13:08:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@969 -- # kill 628558
00:09:15.633 13:08:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@974 -- # wait 628558
00:09:15.892 13:08:56 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:09:15.892
00:09:15.892 real 0m1.615s
00:09:15.892 user 0m4.040s
00:09:15.892 sys 0m0.481s
00:09:15.892 13:08:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:15.892 13:08:56 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:09:15.892 ************************************
00:09:15.892 END TEST bdev_bounds
00:09:15.892 ************************************
00:09:15.892 13:08:56 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' ''
00:09:15.892 13:08:56 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:09:15.892 13:08:56 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:15.892 13:08:56 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:09:16.150 ************************************
00:09:16.150 START TEST bdev_nbd
************************************
00:09:16.150 13:08:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' ''
00:09:16.150 13:08:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:09:16.150 13:08:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:09:16.150 13:08:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:16.150 13:08:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json
00:09:16.150 13:08:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:09:16.151 13:08:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:09:16.151 13:08:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=16
00:09:16.151 13:08:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:09:16.151 13:08:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:09:16.151 13:08:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:09:16.151 13:08:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=16
00:09:16.151 13:08:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:09:16.151 13:08:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:09:16.151 13:08:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:09:16.151 13:08:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:09:16.151 13:08:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=628857
00:09:16.151 13:08:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:09:16.151 13:08:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json ''
00:09:16.151 13:08:56 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 628857 /var/tmp/spdk-nbd.sock
00:09:16.151 13:08:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 628857 ']'
00:09:16.151 13:08:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:09:16.151 13:08:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:16.151 13:08:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:09:16.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:09:16.151 13:08:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:16.151 13:08:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:16.151 [2024-07-26 13:08:56.519638] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:09:16.151 [2024-07-26 13:08:56.519694] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3d:02.1 cannot be 
used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:16.151 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:16.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:16.151 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:16.151 [2024-07-26 13:08:56.652227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.410 [2024-07-26 13:08:56.739714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.410 [2024-07-26 13:08:56.898955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:16.410 [2024-07-26 13:08:56.899003] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:09:16.410 [2024-07-26 13:08:56.899016] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:09:16.410 [2024-07-26 13:08:56.906966] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:16.410 [2024-07-26 13:08:56.906991] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:09:16.410 
[2024-07-26 13:08:56.914976] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:16.410 [2024-07-26 13:08:56.914999] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:09:16.668 [2024-07-26 13:08:56.986444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:16.668 [2024-07-26 13:08:56.986491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.668 [2024-07-26 13:08:56.986506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x189a150 00:09:16.668 [2024-07-26 13:08:56.986516] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.668 [2024-07-26 13:08:56.987794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.668 [2024-07-26 13:08:56.987822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:09:16.926 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:16.926 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:09:16.926 13:08:57 blockdev_general.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:09:16.926 13:08:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.926 13:08:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:09:16.926 13:08:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:16.926 13:08:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # 
nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:09:16.926 13:08:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.926 13:08:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:09:16.926 13:08:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:16.926 13:08:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:16.926 13:08:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:16.926 13:08:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:16.926 13:08:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:16.926 13:08:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:09:17.185 13:08:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:17.185 13:08:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:17.185 13:08:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:17.185 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:17.185 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:17.185 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:17.185 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:17.185 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # 
grep -q -w nbd0 /proc/partitions 00:09:17.185 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:17.185 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:17.185 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:17.185 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:17.185 1+0 records in 00:09:17.185 1+0 records out 00:09:17.185 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267109 s, 15.3 MB/s 00:09:17.185 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:17.185 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:17.185 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:17.185 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:17.185 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:17.185 13:08:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:17.185 13:08:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:17.185 13:08:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:09:17.444 13:08:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:17.444 13:08:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:17.444 13:08:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:17.444 13:08:57 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:17.444 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:17.444 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:17.444 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:17.444 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:17.444 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:17.444 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:17.444 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:17.444 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:17.444 1+0 records in 00:09:17.444 1+0 records out 00:09:17.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279619 s, 14.6 MB/s 00:09:17.444 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:17.444 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:17.444 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:17.444 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:17.444 13:08:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:17.444 13:08:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:17.444 13:08:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:17.444 13:08:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:09:17.702 13:08:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:17.702 13:08:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:17.702 13:08:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:17.702 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:09:17.702 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:17.702 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:17.702 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:17.702 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:09:17.702 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:17.702 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:17.702 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:17.702 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:17.702 1+0 records in 00:09:17.702 1+0 records out 00:09:17.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304247 s, 13.5 MB/s 00:09:17.702 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:17.702 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:17.702 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:17.702 13:08:58 
blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:17.702 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:17.702 13:08:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:17.702 13:08:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:17.702 13:08:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:09:17.961 13:08:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:17.961 13:08:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:17.961 13:08:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:17.961 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:09:17.961 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:17.961 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:17.961 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:17.961 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:09:17.961 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:17.961 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:17.961 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:17.961 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:17.961 1+0 records in 00:09:17.961 1+0 records out 00:09:17.961 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336014 s, 12.2 MB/s 00:09:17.961 
13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:17.961 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:17.961 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:18.219 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:18.219 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:18.219 13:08:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:18.219 13:08:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:18.219 13:08:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:09:18.219 13:08:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:18.219 13:08:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:18.219 13:08:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:18.219 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:09:18.219 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:18.219 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:18.219 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:18.219 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:09:18.219 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:18.219 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:18.219 13:08:58 
blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:18.219 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:18.478 1+0 records in 00:09:18.478 1+0 records out 00:09:18.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390638 s, 10.5 MB/s 00:09:18.478 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:18.478 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:18.478 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:18.478 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:18.478 13:08:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:18.478 13:08:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:18.478 13:08:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:18.478 13:08:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:09:18.478 13:08:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:18.478 13:08:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:18.738 13:08:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:18.738 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:09:18.738 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:18.738 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:18.738 
13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:18.738 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:09:18.738 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:18.738 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:18.738 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:18.738 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:18.738 1+0 records in 00:09:18.738 1+0 records out 00:09:18.738 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349605 s, 11.7 MB/s 00:09:18.738 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:18.738 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:18.739 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:18.739 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:18.739 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:18.739 13:08:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:18.739 13:08:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:18.739 13:08:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:09:18.997 13:08:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:18.997 13:08:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
basename /dev/nbd6 00:09:18.997 13:08:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:18.997 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:09:18.997 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:18.997 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:18.997 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:18.997 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:09:18.997 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:18.997 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:18.997 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:18.997 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:18.997 1+0 records in 00:09:18.997 1+0 records out 00:09:18.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364277 s, 11.2 MB/s 00:09:18.997 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:18.997 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:18.997 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:18.997 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:18.997 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:18.997 13:08:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:18.997 13:08:59 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:18.997 13:08:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:09:19.257 13:08:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:09:19.257 13:08:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:09:19.257 13:08:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:09:19.257 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd7 00:09:19.257 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:19.257 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:19.257 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:19.257 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd7 /proc/partitions 00:09:19.257 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:19.257 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:19.257 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:19.257 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:19.257 1+0 records in 00:09:19.257 1+0 records out 00:09:19.257 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337764 s, 12.1 MB/s 00:09:19.257 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:19.257 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:19.257 13:08:59 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:19.257 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:19.257 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:19.257 13:08:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:19.257 13:08:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:19.257 13:08:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:09:19.516 13:08:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:09:19.516 13:08:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:09:19.516 13:08:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:09:19.516 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd8 00:09:19.516 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:19.516 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:19.516 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:19.516 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd8 /proc/partitions 00:09:19.516 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:19.516 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:19.516 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:19.517 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:19.517 1+0 
records in 00:09:19.517 1+0 records out 00:09:19.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485876 s, 8.4 MB/s 00:09:19.517 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:19.517 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:19.517 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:19.517 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:19.517 13:08:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:19.517 13:08:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:19.517 13:08:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:19.517 13:08:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:09:19.775 13:09:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:09:19.775 13:09:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:09:19.775 13:09:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:09:19.775 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd9 00:09:19.775 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:19.775 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:19.775 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:19.775 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd9 /proc/partitions 00:09:19.775 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # 
break 00:09:19.775 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:19.775 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:19.775 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:19.775 1+0 records in 00:09:19.775 1+0 records out 00:09:19.775 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454749 s, 9.0 MB/s 00:09:19.775 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:19.775 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:19.775 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:19.775 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:19.775 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:19.775 13:09:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:19.775 13:09:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:19.775 13:09:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:09:20.341 13:09:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:09:20.341 13:09:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:09:20.341 13:09:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:09:20.341 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:09:20.341 13:09:00 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@869 -- # local i 00:09:20.341 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:20.341 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:20.341 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:09:20.341 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:20.341 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:20.341 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:20.341 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:20.341 1+0 records in 00:09:20.341 1+0 records out 00:09:20.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514177 s, 8.0 MB/s 00:09:20.341 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:20.341 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:20.341 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:20.341 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:20.341 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:20.341 13:09:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:20.341 13:09:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:20.341 13:09:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:09:20.599 13:09:00 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:09:20.599 13:09:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:09:20.599 13:09:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:09:20.599 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:09:20.599 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:20.599 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:20.599 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:20.599 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:09:20.599 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:20.599 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:20.599 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:20.599 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:20.599 1+0 records in 00:09:20.599 1+0 records out 00:09:20.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604294 s, 6.8 MB/s 00:09:20.599 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:20.599 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:20.599 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:20.599 13:09:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:20.599 13:09:00 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@889 -- # return 0 00:09:20.599 13:09:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:20.599 13:09:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:20.599 13:09:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:09:20.857 13:09:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:09:20.857 13:09:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:09:20.857 13:09:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:09:20.857 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:09:20.857 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:20.857 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:20.857 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:20.857 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:09:20.857 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:20.857 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:20.857 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:20.857 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:20.857 1+0 records in 00:09:20.857 1+0 records out 00:09:20.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000662509 s, 6.2 MB/s 00:09:20.857 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:20.857 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:20.857 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:20.857 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:20.857 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:20.857 13:09:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:20.857 13:09:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:20.857 13:09:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:09:21.115 13:09:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:09:21.115 13:09:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:09:21.115 13:09:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:09:21.115 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:09:21.115 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:21.115 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:21.115 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:21.115 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:09:21.115 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:21.115 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:21.115 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 
00:09:21.115 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:21.115 1+0 records in 00:09:21.115 1+0 records out 00:09:21.115 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000543189 s, 7.5 MB/s 00:09:21.115 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:21.115 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:21.115 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:21.115 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:21.115 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:21.115 13:09:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:21.116 13:09:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:21.116 13:09:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:09:21.374 13:09:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:09:21.374 13:09:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:09:21.374 13:09:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:09:21.374 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:09:21.374 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:21.374 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:21.374 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i 
<= 20 )) 00:09:21.374 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:09:21.374 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:21.374 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:21.374 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:21.374 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:21.374 1+0 records in 00:09:21.374 1+0 records out 00:09:21.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000692804 s, 5.9 MB/s 00:09:21.374 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:21.374 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:21.374 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:21.374 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:21.374 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:21.374 13:09:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:21.374 13:09:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:21.374 13:09:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:09:21.632 13:09:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:09:21.632 13:09:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:09:21.632 13:09:01 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:09:21.632 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd15 00:09:21.632 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:21.632 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:21.632 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:21.632 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd15 /proc/partitions 00:09:21.632 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:21.632 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:21.632 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:21.632 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:21.632 1+0 records in 00:09:21.632 1+0 records out 00:09:21.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000683386 s, 6.0 MB/s 00:09:21.632 13:09:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:21.632 13:09:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:21.632 13:09:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:21.632 13:09:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:21.632 13:09:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:21.632 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:21.632 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:09:21.632 
13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:21.890 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd0", 00:09:21.890 "bdev_name": "Malloc0" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd1", 00:09:21.890 "bdev_name": "Malloc1p0" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd2", 00:09:21.890 "bdev_name": "Malloc1p1" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd3", 00:09:21.890 "bdev_name": "Malloc2p0" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd4", 00:09:21.890 "bdev_name": "Malloc2p1" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd5", 00:09:21.890 "bdev_name": "Malloc2p2" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd6", 00:09:21.890 "bdev_name": "Malloc2p3" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd7", 00:09:21.890 "bdev_name": "Malloc2p4" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd8", 00:09:21.890 "bdev_name": "Malloc2p5" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd9", 00:09:21.890 "bdev_name": "Malloc2p6" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd10", 00:09:21.890 "bdev_name": "Malloc2p7" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd11", 00:09:21.890 "bdev_name": "TestPT" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd12", 00:09:21.890 "bdev_name": "raid0" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd13", 00:09:21.890 "bdev_name": "concat0" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd14", 00:09:21.890 "bdev_name": "raid1" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd15", 00:09:21.890 "bdev_name": "AIO0" 00:09:21.890 } 
00:09:21.890 ]' 00:09:21.890 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:21.890 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd0", 00:09:21.890 "bdev_name": "Malloc0" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd1", 00:09:21.890 "bdev_name": "Malloc1p0" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd2", 00:09:21.890 "bdev_name": "Malloc1p1" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd3", 00:09:21.890 "bdev_name": "Malloc2p0" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd4", 00:09:21.890 "bdev_name": "Malloc2p1" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd5", 00:09:21.890 "bdev_name": "Malloc2p2" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd6", 00:09:21.890 "bdev_name": "Malloc2p3" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd7", 00:09:21.890 "bdev_name": "Malloc2p4" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd8", 00:09:21.890 "bdev_name": "Malloc2p5" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd9", 00:09:21.890 "bdev_name": "Malloc2p6" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd10", 00:09:21.890 "bdev_name": "Malloc2p7" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd11", 00:09:21.890 "bdev_name": "TestPT" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd12", 00:09:21.890 "bdev_name": "raid0" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd13", 00:09:21.890 "bdev_name": "concat0" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd14", 00:09:21.890 "bdev_name": "raid1" 00:09:21.890 }, 00:09:21.890 { 00:09:21.890 "nbd_device": "/dev/nbd15", 00:09:21.890 "bdev_name": "AIO0" 00:09:21.890 } 00:09:21.890 ]' 
00:09:21.890 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:21.890 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:09:21.890 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:21.891 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:09:21.891 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:21.891 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:21.891 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:21.891 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:22.149 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:22.149 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:22.149 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:22.149 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:22.149 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:22.149 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:22.149 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:22.149 13:09:02 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:09:22.149 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:22.149 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:22.407 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:22.407 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:22.407 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:22.407 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:22.407 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:22.407 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:22.407 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:22.407 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:22.407 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:22.407 13:09:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:22.665 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:22.665 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:22.665 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:22.665 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:22.665 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:22.665 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:22.665 13:09:03 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:22.665 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:22.665 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:22.665 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:22.926 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:22.926 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:22.926 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:22.926 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:22.926 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:22.926 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:22.926 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:22.926 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:22.926 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:22.926 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:23.184 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:23.184 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:23.184 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:23.184 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:23.184 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:23.184 13:09:03 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:23.184 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:23.184 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:23.184 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:23.184 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:23.442 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:23.442 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:23.442 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:23.442 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:23.442 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:23.442 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:23.442 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:23.442 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:23.442 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:23.442 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:23.701 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:23.701 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:23.701 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:09:23.701 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:09:23.701 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:23.701 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:23.701 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:23.701 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:23.701 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:23.701 13:09:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:09:23.701 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:09:23.701 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:09:23.701 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:09:23.701 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:23.701 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:23.701 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:09:23.701 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:23.701 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:23.701 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:23.701 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:09:23.960 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:09:23.960 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:09:23.960 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # 
local nbd_name=nbd8 00:09:23.960 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:23.960 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:23.960 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:09:23.960 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:23.960 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:23.960 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:23.960 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:09:24.218 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:09:24.218 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:09:24.218 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:09:24.218 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:24.218 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:24.218 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:09:24.218 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:24.218 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:24.218 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:24.218 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:24.515 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:24.515 13:09:04 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:24.515 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:24.515 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:24.515 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:24.515 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:24.515 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:24.515 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:24.515 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:24.515 13:09:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:24.796 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:24.796 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:24.796 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:24.796 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:24.796 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:24.796 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:24.796 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:24.796 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:24.796 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:24.796 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:25.054 13:09:05 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:25.054 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:25.054 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:25.054 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:25.054 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:25.054 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:25.054 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:25.054 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:25.054 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:25.054 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:25.313 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:25.313 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:25.313 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:25.313 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:25.313 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:25.313 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:25.313 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:25.313 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:25.313 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:25.313 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:25.571 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:25.571 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:25.571 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:25.571 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:25.571 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:25.571 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:25.571 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:25.571 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:25.571 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:25.571 13:09:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:09:25.830 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:09:25.830 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:09:25.830 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:09:25.830 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:25.830 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:25.830 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:09:25.830 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:25.830 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:25.830 13:09:06 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:25.830 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:25.830 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:26.089 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:26.089 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:26.089 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:26.089 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:26.089 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:26.089 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:26.089 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:26.089 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:26.089 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:26.089 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:26.089 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:26.089 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:26.089 13:09:06 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:09:26.089 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:09:26.089 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:09:26.089 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:26.089 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:26.089 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:26.089 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:09:26.089 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:26.089 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:09:26.089 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:26.089 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:26.089 13:09:06 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:26.089 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:26.089 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:26.089 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:26.089 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:26.348 /dev/nbd0 00:09:26.348 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:26.348 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:26.348 13:09:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:26.348 13:09:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:26.348 13:09:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:26.348 13:09:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:26.348 13:09:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:26.348 13:09:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:26.348 13:09:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:26.348 13:09:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:26.348 13:09:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:26.348 1+0 records in 00:09:26.348 1+0 records out 00:09:26.348 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198828 s, 20.6 MB/s 00:09:26.348 13:09:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:26.348 13:09:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:26.348 13:09:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:26.348 13:09:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:26.348 13:09:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:26.348 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:26.348 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:26.348 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:09:26.606 /dev/nbd1 00:09:26.606 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:26.606 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:26.606 13:09:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:26.606 13:09:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:26.606 13:09:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:26.606 13:09:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:26.606 13:09:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:26.606 13:09:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:26.606 13:09:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:26.606 13:09:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:26.606 13:09:06 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:26.606 1+0 records in 00:09:26.606 1+0 records out 00:09:26.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259431 s, 15.8 MB/s 00:09:26.606 13:09:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:26.606 13:09:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:26.606 13:09:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:26.606 13:09:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:26.606 13:09:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:26.606 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:26.606 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:26.606 13:09:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:09:26.865 /dev/nbd10 00:09:26.865 13:09:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:26.865 13:09:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:26.865 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:09:26.865 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:26.865 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:26.865 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:26.865 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 
/proc/partitions 00:09:26.865 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:26.865 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:26.865 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:26.865 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:26.865 1+0 records in 00:09:26.865 1+0 records out 00:09:26.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299183 s, 13.7 MB/s 00:09:26.865 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:26.865 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:26.865 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:26.865 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:26.865 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:26.865 13:09:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:26.865 13:09:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:26.865 13:09:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:09:27.123 /dev/nbd11 00:09:27.123 13:09:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:27.123 13:09:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:27.123 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:09:27.123 13:09:07 
blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:27.123 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:27.123 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:27.123 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:09:27.123 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:27.123 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:27.123 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:27.123 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:27.123 1+0 records in 00:09:27.123 1+0 records out 00:09:27.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302155 s, 13.6 MB/s 00:09:27.123 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:27.123 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:27.123 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:27.123 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:27.123 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:27.123 13:09:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:27.124 13:09:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:27.124 13:09:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Malloc2p1 /dev/nbd12 00:09:27.383 /dev/nbd12 00:09:27.383 13:09:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:27.383 13:09:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:27.383 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:09:27.383 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:27.383 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:27.383 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:27.383 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:09:27.383 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:27.383 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:27.383 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:27.383 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:27.383 1+0 records in 00:09:27.383 1+0 records out 00:09:27.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362745 s, 11.3 MB/s 00:09:27.383 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:27.383 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:27.383 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:27.383 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:27.383 13:09:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 
00:09:27.383 13:09:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:27.383 13:09:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:27.383 13:09:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:09:27.641 /dev/nbd13 00:09:27.642 13:09:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:27.642 13:09:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:27.642 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:09:27.642 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:27.642 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:27.642 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:27.642 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:09:27.642 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:27.642 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:27.642 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:27.642 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:27.642 1+0 records in 00:09:27.642 1+0 records out 00:09:27.642 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000356386 s, 11.5 MB/s 00:09:27.642 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:27.642 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 
00:09:27.642 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:27.642 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:27.642 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:27.642 13:09:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:27.642 13:09:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:27.642 13:09:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:09:27.900 /dev/nbd14 00:09:27.900 13:09:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:27.900 13:09:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:27.900 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:09:27.900 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:27.900 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:27.901 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:27.901 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:09:27.901 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:27.901 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:27.901 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:27.901 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:27.901 1+0 records in 
00:09:27.901 1+0 records out 00:09:27.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406152 s, 10.1 MB/s 00:09:27.901 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:27.901 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:27.901 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:27.901 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:27.901 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:27.901 13:09:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:27.901 13:09:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:27.901 13:09:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:09:28.159 /dev/nbd15 00:09:28.159 13:09:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:09:28.159 13:09:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:09:28.159 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd15 00:09:28.159 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:28.159 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:28.159 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:28.159 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd15 /proc/partitions 00:09:28.159 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:28.159 13:09:08 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:28.159 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:28.159 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:28.159 1+0 records in 00:09:28.159 1+0 records out 00:09:28.159 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386913 s, 10.6 MB/s 00:09:28.160 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:28.160 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:28.160 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:28.160 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:28.160 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:28.160 13:09:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:28.160 13:09:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:28.160 13:09:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:09:28.418 /dev/nbd2 00:09:28.418 13:09:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:09:28.418 13:09:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:09:28.418 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:09:28.418 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:28.418 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 
00:09:28.418 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:28.418 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:09:28.418 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:28.418 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:28.418 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:28.418 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:28.418 1+0 records in 00:09:28.418 1+0 records out 00:09:28.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360405 s, 11.4 MB/s 00:09:28.418 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:28.418 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:28.418 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:28.418 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:28.418 13:09:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:28.418 13:09:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:28.418 13:09:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:28.418 13:09:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:09:28.677 /dev/nbd3 00:09:28.677 13:09:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:09:28.677 13:09:09 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:09:28.677 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:09:28.677 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:28.677 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:28.677 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:28.677 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:09:28.677 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:28.677 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:28.677 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:28.677 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:28.677 1+0 records in 00:09:28.677 1+0 records out 00:09:28.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000583491 s, 7.0 MB/s 00:09:28.677 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:28.677 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:28.677 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:28.677 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:28.677 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:28.677 13:09:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:28.677 13:09:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i 
< 16 )) 00:09:28.677 13:09:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:09:28.935 /dev/nbd4 00:09:28.935 13:09:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:09:28.935 13:09:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:09:28.935 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:09:28.935 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:28.935 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:28.935 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:28.935 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:09:28.935 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:28.935 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:28.935 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:28.935 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:28.935 1+0 records in 00:09:28.935 1+0 records out 00:09:28.935 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555651 s, 7.4 MB/s 00:09:28.935 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:28.935 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:28.935 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:28.935 
13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:28.935 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:28.935 13:09:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:28.935 13:09:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:28.935 13:09:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:09:29.194 /dev/nbd5 00:09:29.194 13:09:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:09:29.194 13:09:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:09:29.194 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:09:29.194 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:29.194 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:29.194 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:29.194 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:09:29.194 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:29.194 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:29.194 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:29.194 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:29.194 1+0 records in 00:09:29.194 1+0 records out 00:09:29.194 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555341 s, 7.4 MB/s 00:09:29.194 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 
-- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:29.194 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:29.194 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:29.194 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:29.194 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:29.194 13:09:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:29.194 13:09:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:29.194 13:09:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:09:29.452 /dev/nbd6 00:09:29.452 13:09:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:09:29.452 13:09:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:09:29.452 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:09:29.452 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:29.452 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:29.452 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:29.452 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:09:29.452 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:29.452 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:29.452 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:29.452 13:09:09 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:29.452 1+0 records in 00:09:29.452 1+0 records out 00:09:29.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000771442 s, 5.3 MB/s 00:09:29.452 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:29.452 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:29.452 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:29.452 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:29.452 13:09:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:29.452 13:09:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:29.452 13:09:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:29.452 13:09:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:09:29.710 /dev/nbd7 00:09:29.710 13:09:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:09:29.710 13:09:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:09:29.710 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd7 00:09:29.710 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:29.710 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:29.710 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:29.710 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd7 
/proc/partitions 00:09:29.710 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:29.710 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:29.710 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:29.710 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:29.710 1+0 records in 00:09:29.710 1+0 records out 00:09:29.710 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000601593 s, 6.8 MB/s 00:09:29.710 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:29.710 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:29.710 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:29.710 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:29.710 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:29.710 13:09:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:29.710 13:09:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:29.710 13:09:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:09:29.968 /dev/nbd8 00:09:29.968 13:09:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:09:29.968 13:09:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:09:29.968 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd8 00:09:29.968 13:09:10 
blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:29.968 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:29.968 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:29.968 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd8 /proc/partitions 00:09:29.968 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:29.968 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:29.968 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:29.968 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:29.968 1+0 records in 00:09:29.968 1+0 records out 00:09:29.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638328 s, 6.4 MB/s 00:09:29.968 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:29.968 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:29.968 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:29.968 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:29.968 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:29.968 13:09:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:29.968 13:09:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:29.968 13:09:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 
/dev/nbd9 00:09:30.227 /dev/nbd9 00:09:30.227 13:09:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:09:30.227 13:09:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:09:30.227 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd9 00:09:30.227 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:09:30.227 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:30.227 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:30.227 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd9 /proc/partitions 00:09:30.227 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:09:30.227 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:30.227 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:30.227 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:30.227 1+0 records in 00:09:30.227 1+0 records out 00:09:30.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000821753 s, 5.0 MB/s 00:09:30.227 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:30.485 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:09:30.485 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:30.485 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:30.485 13:09:10 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:09:30.485 
13:09:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:30.485 13:09:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:09:30.485 13:09:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:30.485 13:09:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:30.485 13:09:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:30.485 13:09:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:30.485 { 00:09:30.485 "nbd_device": "/dev/nbd0", 00:09:30.485 "bdev_name": "Malloc0" 00:09:30.485 }, 00:09:30.485 { 00:09:30.485 "nbd_device": "/dev/nbd1", 00:09:30.485 "bdev_name": "Malloc1p0" 00:09:30.485 }, 00:09:30.485 { 00:09:30.485 "nbd_device": "/dev/nbd10", 00:09:30.485 "bdev_name": "Malloc1p1" 00:09:30.485 }, 00:09:30.485 { 00:09:30.485 "nbd_device": "/dev/nbd11", 00:09:30.485 "bdev_name": "Malloc2p0" 00:09:30.485 }, 00:09:30.485 { 00:09:30.485 "nbd_device": "/dev/nbd12", 00:09:30.485 "bdev_name": "Malloc2p1" 00:09:30.485 }, 00:09:30.485 { 00:09:30.485 "nbd_device": "/dev/nbd13", 00:09:30.485 "bdev_name": "Malloc2p2" 00:09:30.485 }, 00:09:30.485 { 00:09:30.485 "nbd_device": "/dev/nbd14", 00:09:30.485 "bdev_name": "Malloc2p3" 00:09:30.485 }, 00:09:30.485 { 00:09:30.485 "nbd_device": "/dev/nbd15", 00:09:30.485 "bdev_name": "Malloc2p4" 00:09:30.485 }, 00:09:30.485 { 00:09:30.485 "nbd_device": "/dev/nbd2", 00:09:30.485 "bdev_name": "Malloc2p5" 00:09:30.485 }, 00:09:30.485 { 00:09:30.485 "nbd_device": "/dev/nbd3", 00:09:30.485 "bdev_name": "Malloc2p6" 00:09:30.485 }, 00:09:30.485 { 00:09:30.485 "nbd_device": "/dev/nbd4", 00:09:30.485 "bdev_name": "Malloc2p7" 00:09:30.485 }, 00:09:30.485 { 00:09:30.485 "nbd_device": "/dev/nbd5", 00:09:30.485 "bdev_name": "TestPT" 00:09:30.485 }, 00:09:30.485 { 
00:09:30.485 "nbd_device": "/dev/nbd6", 00:09:30.485 "bdev_name": "raid0" 00:09:30.485 }, 00:09:30.485 { 00:09:30.485 "nbd_device": "/dev/nbd7", 00:09:30.485 "bdev_name": "concat0" 00:09:30.485 }, 00:09:30.485 { 00:09:30.485 "nbd_device": "/dev/nbd8", 00:09:30.485 "bdev_name": "raid1" 00:09:30.485 }, 00:09:30.485 { 00:09:30.485 "nbd_device": "/dev/nbd9", 00:09:30.485 "bdev_name": "AIO0" 00:09:30.485 } 00:09:30.485 ]' 00:09:30.485 13:09:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:30.485 { 00:09:30.485 "nbd_device": "/dev/nbd0", 00:09:30.485 "bdev_name": "Malloc0" 00:09:30.485 }, 00:09:30.485 { 00:09:30.485 "nbd_device": "/dev/nbd1", 00:09:30.485 "bdev_name": "Malloc1p0" 00:09:30.485 }, 00:09:30.485 { 00:09:30.485 "nbd_device": "/dev/nbd10", 00:09:30.485 "bdev_name": "Malloc1p1" 00:09:30.485 }, 00:09:30.485 { 00:09:30.485 "nbd_device": "/dev/nbd11", 00:09:30.485 "bdev_name": "Malloc2p0" 00:09:30.485 }, 00:09:30.485 { 00:09:30.485 "nbd_device": "/dev/nbd12", 00:09:30.485 "bdev_name": "Malloc2p1" 00:09:30.485 }, 00:09:30.485 { 00:09:30.485 "nbd_device": "/dev/nbd13", 00:09:30.485 "bdev_name": "Malloc2p2" 00:09:30.485 }, 00:09:30.485 { 00:09:30.485 "nbd_device": "/dev/nbd14", 00:09:30.485 "bdev_name": "Malloc2p3" 00:09:30.485 }, 00:09:30.485 { 00:09:30.485 "nbd_device": "/dev/nbd15", 00:09:30.485 "bdev_name": "Malloc2p4" 00:09:30.485 }, 00:09:30.485 { 00:09:30.486 "nbd_device": "/dev/nbd2", 00:09:30.486 "bdev_name": "Malloc2p5" 00:09:30.486 }, 00:09:30.486 { 00:09:30.486 "nbd_device": "/dev/nbd3", 00:09:30.486 "bdev_name": "Malloc2p6" 00:09:30.486 }, 00:09:30.486 { 00:09:30.486 "nbd_device": "/dev/nbd4", 00:09:30.486 "bdev_name": "Malloc2p7" 00:09:30.486 }, 00:09:30.486 { 00:09:30.486 "nbd_device": "/dev/nbd5", 00:09:30.486 "bdev_name": "TestPT" 00:09:30.486 }, 00:09:30.486 { 00:09:30.486 "nbd_device": "/dev/nbd6", 00:09:30.486 "bdev_name": "raid0" 00:09:30.486 }, 00:09:30.486 { 00:09:30.486 "nbd_device": "/dev/nbd7", 00:09:30.486 
"bdev_name": "concat0" 00:09:30.486 }, 00:09:30.486 { 00:09:30.486 "nbd_device": "/dev/nbd8", 00:09:30.486 "bdev_name": "raid1" 00:09:30.486 }, 00:09:30.486 { 00:09:30.486 "nbd_device": "/dev/nbd9", 00:09:30.486 "bdev_name": "AIO0" 00:09:30.486 } 00:09:30.486 ]' 00:09:30.486 13:09:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:30.745 13:09:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:30.745 /dev/nbd1 00:09:30.745 /dev/nbd10 00:09:30.745 /dev/nbd11 00:09:30.745 /dev/nbd12 00:09:30.745 /dev/nbd13 00:09:30.745 /dev/nbd14 00:09:30.745 /dev/nbd15 00:09:30.745 /dev/nbd2 00:09:30.745 /dev/nbd3 00:09:30.745 /dev/nbd4 00:09:30.745 /dev/nbd5 00:09:30.745 /dev/nbd6 00:09:30.745 /dev/nbd7 00:09:30.745 /dev/nbd8 00:09:30.745 /dev/nbd9' 00:09:30.745 13:09:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:30.745 /dev/nbd1 00:09:30.745 /dev/nbd10 00:09:30.745 /dev/nbd11 00:09:30.745 /dev/nbd12 00:09:30.745 /dev/nbd13 00:09:30.745 /dev/nbd14 00:09:30.745 /dev/nbd15 00:09:30.745 /dev/nbd2 00:09:30.745 /dev/nbd3 00:09:30.745 /dev/nbd4 00:09:30.745 /dev/nbd5 00:09:30.745 /dev/nbd6 00:09:30.745 /dev/nbd7 00:09:30.745 /dev/nbd8 00:09:30.745 /dev/nbd9' 00:09:30.745 13:09:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:30.745 13:09:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:09:30.745 13:09:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:09:30.745 13:09:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:09:30.745 13:09:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:09:30.745 13:09:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 
00:09:30.745 13:09:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:30.745 13:09:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:30.745 13:09:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:30.745 13:09:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:09:30.745 13:09:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:30.745 13:09:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:30.745 256+0 records in 00:09:30.745 256+0 records out 00:09:30.745 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00464804 s, 226 MB/s 00:09:30.745 13:09:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:30.745 13:09:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:30.745 256+0 records in 00:09:30.745 256+0 records out 00:09:30.745 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166547 s, 6.3 MB/s 00:09:30.745 13:09:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:30.745 13:09:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:31.004 256+0 records in 00:09:31.004 256+0 records out 00:09:31.004 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.158487 s, 6.6 MB/s 00:09:31.004 13:09:11 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:31.004 13:09:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:31.263 256+0 records in 00:09:31.263 256+0 records out 00:09:31.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168658 s, 6.2 MB/s 00:09:31.263 13:09:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:31.263 13:09:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:31.263 256+0 records in 00:09:31.263 256+0 records out 00:09:31.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167159 s, 6.3 MB/s 00:09:31.263 13:09:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:31.263 13:09:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:31.523 256+0 records in 00:09:31.523 256+0 records out 00:09:31.523 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165365 s, 6.3 MB/s 00:09:31.523 13:09:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:31.523 13:09:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:31.523 256+0 records in 00:09:31.523 256+0 records out 00:09:31.523 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.091862 s, 11.4 MB/s 00:09:31.523 13:09:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:31.523 13:09:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 
oflag=direct 00:09:31.898 256+0 records in 00:09:31.898 256+0 records out 00:09:31.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168501 s, 6.2 MB/s 00:09:31.898 13:09:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:31.898 13:09:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:09:31.898 256+0 records in 00:09:31.898 256+0 records out 00:09:31.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164764 s, 6.4 MB/s 00:09:31.898 13:09:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:31.898 13:09:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:09:32.157 256+0 records in 00:09:32.157 256+0 records out 00:09:32.157 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168103 s, 6.2 MB/s 00:09:32.157 13:09:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.157 13:09:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:09:32.157 256+0 records in 00:09:32.157 256+0 records out 00:09:32.157 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.10114 s, 10.4 MB/s 00:09:32.157 13:09:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.157 13:09:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:09:32.416 256+0 records in 00:09:32.416 256+0 records out 00:09:32.416 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.096997 s, 10.8 MB/s 00:09:32.416 13:09:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in 
"${nbd_list[@]}" 00:09:32.416 13:09:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:09:32.416 256+0 records in 00:09:32.416 256+0 records out 00:09:32.416 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.113751 s, 9.2 MB/s 00:09:32.416 13:09:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.416 13:09:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:09:32.675 256+0 records in 00:09:32.675 256+0 records out 00:09:32.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.16834 s, 6.2 MB/s 00:09:32.675 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.675 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:09:32.675 256+0 records in 00:09:32.675 256+0 records out 00:09:32.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.101909 s, 10.3 MB/s 00:09:32.675 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.675 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:09:32.933 256+0 records in 00:09:32.933 256+0 records out 00:09:32.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.16592 s, 6.3 MB/s 00:09:32.933 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.933 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:09:32.933 256+0 records in 
00:09:32.933 256+0 records out 00:09:32.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165291 s, 6.3 MB/s 00:09:32.933 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:09:32.933 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:32.933 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:32.933 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:32.933 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:09:32.933 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:32.933 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:32.933 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.933 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:33.192 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:33.192 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:33.192 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:33.192 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:33.192 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:33.192 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:33.192 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:33.192 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:33.192 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:33.192 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:33.192 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:33.192 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:33.193 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:33.193 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd15 00:09:33.193 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:33.193 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd2 00:09:33.193 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:33.193 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd3 00:09:33.193 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:33.193 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd4 00:09:33.193 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:33.193 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd5 00:09:33.193 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:33.193 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd6 00:09:33.193 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:33.193 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd7 00:09:33.193 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:33.193 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd8 00:09:33.193 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:33.193 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd9 00:09:33.193 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:09:33.193 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock 
'/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:09:33.193 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.193 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:33.193 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:33.193 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:33.193 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.193 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:33.452 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:33.452 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:33.452 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:33.452 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.452 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.452 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:33.452 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:33.452 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.452 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.452 13:09:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:33.712 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:33.712 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:33.712 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:33.712 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.712 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.712 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:33.712 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:33.712 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.712 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.712 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:33.971 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:33.971 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:33.971 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:33.971 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.971 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.971 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:33.971 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:33.971 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.971 13:09:14 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.971 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:34.230 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:34.230 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:34.230 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:34.230 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:34.230 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:34.230 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:34.230 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:34.230 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:34.230 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:34.230 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:34.490 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:34.490 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:34.490 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:34.490 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:34.490 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:34.490 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:34.490 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:34.490 13:09:14 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:34.490 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:34.490 13:09:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:34.749 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:34.749 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:34.749 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:34.749 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:34.749 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:34.749 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:34.749 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:34.749 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:34.749 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:34.749 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:35.008 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:35.008 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:35.008 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:35.008 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:35.008 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:35.008 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 
/proc/partitions 00:09:35.008 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:35.008 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:35.008 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:35.008 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:09:35.267 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:09:35.267 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:09:35.267 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:09:35.267 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:35.267 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:35.267 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:09:35.267 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:35.267 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:35.267 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:35.267 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:35.526 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:35.526 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:35.526 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:35.526 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:35.526 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:09:35.526 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:35.526 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:35.526 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:35.526 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:35.526 13:09:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:35.784 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:35.784 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:35.784 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:35.784 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:35.784 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:35.784 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:35.784 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:35.784 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:35.784 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:35.784 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:35.784 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:36.043 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:36.043 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:36.043 13:09:16 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:36.043 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:36.043 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:36.043 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:36.043 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:36.043 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:36.043 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:36.043 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:36.043 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:36.043 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:36.302 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:36.302 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:36.302 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:36.302 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:36.302 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:36.302 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:36.302 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:36.302 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:36.302 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:36.302 13:09:16 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:09:36.302 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:36.302 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:36.302 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:36.302 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:36.302 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:36.302 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:36.302 13:09:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:09:36.562 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:09:36.562 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:09:36.562 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:09:36.562 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:36.562 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:36.562 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:09:36.562 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:36.562 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:36.562 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:36.562 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:09:36.821 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:09:36.821 
13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:09:36.821 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:09:36.821 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:36.821 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:36.821 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:09:36.821 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:36.821 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:36.821 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:36.821 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:09:37.088 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:09:37.088 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:09:37.088 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:09:37.088 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:37.088 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:37.088 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:09:37.088 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:37.088 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:37.088 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:37.088 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:37.088 13:09:17 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:37.386 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:37.386 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:37.386 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:37.386 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:37.386 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:37.386 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:37.386 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:37.386 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:37.386 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:37.386 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:37.386 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:37.386 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:37.386 13:09:17 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:09:37.386 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:37.386 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:37.386 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local 
nbd_list 00:09:37.386 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:37.386 13:09:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:37.645 malloc_lvol_verify 00:09:37.645 13:09:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:37.904 684b55ea-2707-45ad-bdb8-976343ba7c6f 00:09:37.904 13:09:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:38.163 89594b60-4316-47bc-a848-685ecb332efe 00:09:38.163 13:09:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:38.422 /dev/nbd0 00:09:38.422 13:09:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:38.422 mke2fs 1.46.5 (30-Dec-2021) 00:09:38.422 Discarding device blocks: 0/4096 done 00:09:38.422 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:38.422 00:09:38.422 Allocating group tables: 0/1 done 00:09:38.422 Writing inode tables: 0/1 done 00:09:38.422 Creating journal (1024 blocks): done 00:09:38.422 Writing superblocks and filesystem accounting information: 0/1 done 00:09:38.422 00:09:38.422 13:09:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:38.422 13:09:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:38.422 13:09:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:38.422 13:09:18 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:38.422 13:09:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:38.422 13:09:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:38.422 13:09:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:38.422 13:09:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:38.681 13:09:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:38.681 13:09:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:38.681 13:09:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:38.681 13:09:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:38.681 13:09:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:38.681 13:09:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:38.681 13:09:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:38.681 13:09:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:38.681 13:09:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:38.681 13:09:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:09:38.681 13:09:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 628857 00:09:38.681 13:09:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 628857 ']' 00:09:38.681 13:09:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 628857 00:09:38.681 13:09:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:09:38.681 13:09:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:38.681 13:09:19 
blockdev_general.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 628857 00:09:38.681 13:09:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:38.681 13:09:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:38.681 13:09:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 628857' 00:09:38.681 killing process with pid 628857 00:09:38.681 13:09:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@969 -- # kill 628857 00:09:38.681 13:09:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@974 -- # wait 628857 00:09:38.940 13:09:19 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:09:38.940 00:09:38.940 real 0m22.978s 00:09:38.940 user 0m28.370s 00:09:38.940 sys 0m13.249s 00:09:38.940 13:09:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.940 13:09:19 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:38.940 ************************************ 00:09:38.940 END TEST bdev_nbd 00:09:38.940 ************************************ 00:09:39.199 13:09:19 blockdev_general -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:09:39.199 13:09:19 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = nvme ']' 00:09:39.199 13:09:19 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = gpt ']' 00:09:39.199 13:09:19 blockdev_general -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:09:39.199 13:09:19 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:39.199 13:09:19 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.199 13:09:19 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:09:39.199 ************************************ 00:09:39.199 START TEST bdev_fio 00:09:39.199 ************************************ 00:09:39.199 13:09:19 
blockdev_general.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:09:39.199 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 
00:09:39.199 13:09:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc0]' 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc0 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p0]' 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p0 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p1]' 00:09:39.199 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p1 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # 
echo '[job_Malloc2p0]' 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p0 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p1]' 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p1 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p2]' 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p2 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p3]' 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p3 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p4]' 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p4 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p5]' 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p5 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p6]' 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p6 00:09:39.200 13:09:19 
blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p7]' 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p7 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_TestPT]' 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=TestPT 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid0]' 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid0 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_concat0]' 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=concat0 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid1]' 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid1 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_AIO0]' 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=AIO0 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 
--verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.200 13:09:19 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:09:39.200 ************************************ 00:09:39.200 START TEST bdev_fio_rw_verify 00:09:39.200 ************************************ 00:09:39.200 13:09:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:09:39.200 13:09:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:09:39.200 13:09:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # 
local fio_dir=/usr/src/fio 00:09:39.200 13:09:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:39.200 13:09:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:39.200 13:09:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:09:39.200 13:09:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:09:39.200 13:09:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:39.200 13:09:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:39.200 13:09:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:09:39.200 13:09:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:09:39.200 13:09:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:39.200 13:09:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:09:39.200 13:09:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:09:39.200 13:09:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:39.200 13:09:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:09:39.200 13:09:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:09:39.200 13:09:19 
blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:39.200 13:09:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:09:39.200 13:09:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:09:39.200 13:09:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:09:39.200 13:09:19 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:09:39.766 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:39.766 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:39.766 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:39.766 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:39.766 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:39.766 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:39.766 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:39.766 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=spdk_bdev, iodepth=8 00:09:39.766 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:39.766 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:39.766 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:39.766 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:39.766 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:39.766 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:39.766 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:39.766 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:39.766 fio-3.35 00:09:39.766 Starting 16 threads 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:39.766 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:39.766 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:39.766 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:39.766 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:51.970 00:09:51.970 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=634324: Fri Jul 26 13:09:30 2024 00:09:51.970 read: IOPS=96.9k, BW=379MiB/s (397MB/s)(3787MiB/10001msec) 00:09:51.970 slat (usec): min=2, max=1292, avg=33.69, stdev=13.72 00:09:51.970 clat (usec): min=10, max=1726, avg=268.51, stdev=122.07 
00:09:51.970 lat (usec): min=20, max=1749, avg=302.20, stdev=129.08 00:09:51.970 clat percentiles (usec): 00:09:51.970 | 50.000th=[ 265], 99.000th=[ 529], 99.900th=[ 652], 99.990th=[ 799], 00:09:51.970 | 99.999th=[ 1012] 00:09:51.970 write: IOPS=153k, BW=598MiB/s (627MB/s)(5895MiB/9862msec); 0 zone resets 00:09:51.970 slat (usec): min=7, max=3471, avg=45.45, stdev=13.81 00:09:51.970 clat (usec): min=12, max=3840, avg=314.67, stdev=140.37 00:09:51.970 lat (usec): min=35, max=3881, avg=360.11, stdev=146.81 00:09:51.970 clat percentiles (usec): 00:09:51.970 | 50.000th=[ 302], 99.000th=[ 693], 99.900th=[ 832], 99.990th=[ 988], 00:09:51.970 | 99.999th=[ 1385] 00:09:51.970 bw ( KiB/s): min=504753, max=762921, per=98.79%, avg=604705.42, stdev=4127.71, samples=304 00:09:51.970 iops : min=126188, max=190728, avg=151175.89, stdev=1031.90, samples=304 00:09:51.970 lat (usec) : 20=0.01%, 50=0.71%, 100=5.10%, 250=35.39%, 500=51.92% 00:09:51.970 lat (usec) : 750=6.52%, 1000=0.34% 00:09:51.970 lat (msec) : 2=0.01%, 4=0.01% 00:09:51.970 cpu : usr=99.24%, sys=0.38%, ctx=680, majf=0, minf=2465 00:09:51.970 IO depths : 1=12.4%, 2=24.8%, 4=50.2%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:51.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.970 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.970 issued rwts: total=969349,1509127,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.970 latency : target=0, window=0, percentile=100.00%, depth=8 00:09:51.970 00:09:51.970 Run status group 0 (all jobs): 00:09:51.970 READ: bw=379MiB/s (397MB/s), 379MiB/s-379MiB/s (397MB/s-397MB/s), io=3787MiB (3970MB), run=10001-10001msec 00:09:51.970 WRITE: bw=598MiB/s (627MB/s), 598MiB/s-598MiB/s (627MB/s-627MB/s), io=5895MiB (6181MB), run=9862-9862msec 00:09:51.970 00:09:51.970 real 0m11.503s 00:09:51.970 user 2m52.435s 00:09:51.970 sys 0m1.287s 00:09:51.970 13:09:31 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 
-- # xtrace_disable 00:09:51.970 13:09:31 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:09:51.970 ************************************ 00:09:51.970 END TEST bdev_fio_rw_verify 00:09:51.970 ************************************ 00:09:51.970 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:09:51.970 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:09:51.970 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:09:51.970 13:09:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:09:51.970 13:09:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:09:51.970 13:09:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:09:51.970 13:09:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:09:51.970 13:09:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:09:51.970 13:09:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:09:51.970 13:09:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:09:51.970 13:09:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:09:51.970 13:09:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:09:51.970 13:09:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:09:51.970 13:09:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == 
verify ']' 00:09:51.970 13:09:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:09:51.970 13:09:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:09:51.970 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:09:51.972 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "2c46e97c-57aa-4658-8a74-e88dbe1d5807"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2c46e97c-57aa-4658-8a74-e88dbe1d5807",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "5b06c5a7-0e33-5c30-bc74-dca713720c77"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "5b06c5a7-0e33-5c30-bc74-dca713720c77",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": 
true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "e8a9e14b-51ed-5389-9f4f-a35090b56e0b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "e8a9e14b-51ed-5389-9f4f-a35090b56e0b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "9d709408-1879-5b65-a057-a3583b8ba1c5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9d709408-1879-5b65-a057-a3583b8ba1c5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' 
"nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "e19f1637-f937-5af5-a533-800b442131c7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e19f1637-f937-5af5-a533-800b442131c7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "f593c122-ae0b-54f7-a669-7e7cbe0197a5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f593c122-ae0b-54f7-a669-7e7cbe0197a5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' 
"nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "7ceee860-167d-573d-89a7-aee7fa0988ad"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7ceee860-167d-573d-89a7-aee7fa0988ad",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "8949cbd7-c94c-5f11-8318-381f3ee383ab"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8949cbd7-c94c-5f11-8318-381f3ee383ab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' 
"zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "dbe91e20-c295-5a7a-b664-0eb050d96a0a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dbe91e20-c295-5a7a-b664-0eb050d96a0a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "d43793d5-1d9d-5134-aaeb-e10da8aae7a8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d43793d5-1d9d-5134-aaeb-e10da8aae7a8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "b36ac161-8c07-592f-bceb-acc9c48c3244"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b36ac161-8c07-592f-bceb-acc9c48c3244",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "b75803da-aba6-53b7-bab4-f524a2039f81"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b75803da-aba6-53b7-bab4-f524a2039f81",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "db10fa74-5f2c-464f-8ac7-2e48b84559ae"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "db10fa74-5f2c-464f-8ac7-2e48b84559ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "db10fa74-5f2c-464f-8ac7-2e48b84559ae",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": 
"37fa96eb-e3c1-4aa8-93a3-88c25d02f483",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "7be745ab-a5b2-4498-96ed-da235dfe4c92",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "ce664658-b044-40e3-aff3-548a5e790e8b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "ce664658-b044-40e3-aff3-548a5e790e8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "ce664658-b044-40e3-aff3-548a5e790e8b",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "475ea17c-0754-49da-bb35-4e973cdbedbb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": 
"911d2080-bd0a-408c-a6dc-2bed310c3ed0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "4ec05c3f-6930-4e0b-8221-f9f082d79464"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4ec05c3f-6930-4e0b-8221-f9f082d79464",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "4ec05c3f-6930-4e0b-8221-f9f082d79464",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "58b6d072-4453-4743-933e-21d40f584816",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "b519b972-c9f7-4466-a38e-831d1165821a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' 
"b6f7a565-a505-4fc4-ba0f-f19e391292d6"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "b6f7a565-a505-4fc4-ba0f-f19e391292d6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:09:51.972 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n Malloc0 00:09:51.972 Malloc1p0 00:09:51.972 Malloc1p1 00:09:51.972 Malloc2p0 00:09:51.972 Malloc2p1 00:09:51.972 Malloc2p2 00:09:51.972 Malloc2p3 00:09:51.972 Malloc2p4 00:09:51.972 Malloc2p5 00:09:51.972 Malloc2p6 00:09:51.972 Malloc2p7 00:09:51.972 TestPT 00:09:51.972 raid0 00:09:51.972 concat0 ]] 00:09:51.972 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "2c46e97c-57aa-4658-8a74-e88dbe1d5807"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "2c46e97c-57aa-4658-8a74-e88dbe1d5807",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 
0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "5b06c5a7-0e33-5c30-bc74-dca713720c77"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "5b06c5a7-0e33-5c30-bc74-dca713720c77",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "e8a9e14b-51ed-5389-9f4f-a35090b56e0b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "e8a9e14b-51ed-5389-9f4f-a35090b56e0b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "9d709408-1879-5b65-a057-a3583b8ba1c5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9d709408-1879-5b65-a057-a3583b8ba1c5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "e19f1637-f937-5af5-a533-800b442131c7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e19f1637-f937-5af5-a533-800b442131c7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' 
},' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "f593c122-ae0b-54f7-a669-7e7cbe0197a5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f593c122-ae0b-54f7-a669-7e7cbe0197a5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "7ceee860-167d-573d-89a7-aee7fa0988ad"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7ceee860-167d-573d-89a7-aee7fa0988ad",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "8949cbd7-c94c-5f11-8318-381f3ee383ab"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8949cbd7-c94c-5f11-8318-381f3ee383ab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "dbe91e20-c295-5a7a-b664-0eb050d96a0a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dbe91e20-c295-5a7a-b664-0eb050d96a0a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "d43793d5-1d9d-5134-aaeb-e10da8aae7a8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d43793d5-1d9d-5134-aaeb-e10da8aae7a8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "b36ac161-8c07-592f-bceb-acc9c48c3244"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b36ac161-8c07-592f-bceb-acc9c48c3244",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": 
true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "b75803da-aba6-53b7-bab4-f524a2039f81"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b75803da-aba6-53b7-bab4-f524a2039f81",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "db10fa74-5f2c-464f-8ac7-2e48b84559ae"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "db10fa74-5f2c-464f-8ac7-2e48b84559ae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' 
"claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "db10fa74-5f2c-464f-8ac7-2e48b84559ae",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "37fa96eb-e3c1-4aa8-93a3-88c25d02f483",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "7be745ab-a5b2-4498-96ed-da235dfe4c92",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "ce664658-b044-40e3-aff3-548a5e790e8b"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "ce664658-b044-40e3-aff3-548a5e790e8b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' 
"nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "ce664658-b044-40e3-aff3-548a5e790e8b",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "475ea17c-0754-49da-bb35-4e973cdbedbb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "911d2080-bd0a-408c-a6dc-2bed310c3ed0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "4ec05c3f-6930-4e0b-8221-f9f082d79464"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4ec05c3f-6930-4e0b-8221-f9f082d79464",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": 
false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "4ec05c3f-6930-4e0b-8221-f9f082d79464",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "58b6d072-4453-4743-933e-21d40f584816",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "b519b972-c9f7-4466-a38e-831d1165821a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "b6f7a565-a505-4fc4-ba0f-f19e391292d6"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "b6f7a565-a505-4fc4-ba0f-f19e391292d6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' 
' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc0]' 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc0 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p0]' 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p0 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p1]' 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p1 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p0]' 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p0 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p1]' 00:09:51.973 13:09:31 
blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p1 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p2]' 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p2 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p3]' 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p3 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p4]' 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p4 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p5]' 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p5 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p6]' 00:09:51.973 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p6 00:09:51.974 13:09:31 
blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:51.974 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p7]' 00:09:51.974 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p7 00:09:51.974 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:51.974 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_TestPT]' 00:09:51.974 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=TestPT 00:09:51.974 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:51.974 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_raid0]' 00:09:51.974 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=raid0 00:09:51.974 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:09:51.974 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_concat0]' 00:09:51.974 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=concat0 00:09:51.974 13:09:31 blockdev_general.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:09:51.974 13:09:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 
']' 00:09:51.974 13:09:31 blockdev_general.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:51.974 13:09:31 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:09:51.974 ************************************ 00:09:51.974 START TEST bdev_fio_trim 00:09:51.974 ************************************ 00:09:51.974 13:09:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:09:51.974 13:09:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:09:51.974 13:09:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:51.974 13:09:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:51.974 13:09:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:51.974 13:09:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:09:51.974 13:09:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:09:51.974 13:09:31 
blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:51.974 13:09:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:51.974 13:09:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:09:51.974 13:09:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:09:51.974 13:09:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:51.974 13:09:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:09:51.974 13:09:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:09:51.974 13:09:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:51.974 13:09:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:09:51.974 13:09:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:51.974 13:09:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:09:51.974 13:09:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:09:51.974 13:09:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:09:51.974 13:09:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:09:51.974 13:09:31 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:09:51.974 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:51.974 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:51.974 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:51.974 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:51.974 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:51.974 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:51.974 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:51.974 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:51.974 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:51.974 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:51.974 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:51.974 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:51.974 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:51.974 
job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:09:51.974 fio-3.35 00:09:51.974 Starting 14 threads 00:09:51.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.974 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:51.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.974 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:51.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.974 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:51.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.974 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:51.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.974 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:51.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.974 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:51.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.974 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:51.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.974 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:51.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.974 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:51.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.974 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:51.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.974 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:51.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.974 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:51.974 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:51.974 EAL: Requested device 0000:3d:02.4 cannot 
be used
00:09:51.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:51.974 EAL: Requested device 0000:3d:02.5 cannot be used
00:09:51.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:51.974 EAL: Requested device 0000:3d:02.6 cannot be used
00:09:51.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:51.974 EAL: Requested device 0000:3d:02.7 cannot be used
00:09:51.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:51.974 EAL: Requested device 0000:3f:01.0 cannot be used
00:09:51.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:51.974 EAL: Requested device 0000:3f:01.1 cannot be used
00:09:51.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:51.974 EAL: Requested device 0000:3f:01.2 cannot be used
00:09:51.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:51.974 EAL: Requested device 0000:3f:01.3 cannot be used
00:09:51.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:51.974 EAL: Requested device 0000:3f:01.4 cannot be used
00:09:51.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:51.974 EAL: Requested device 0000:3f:01.5 cannot be used
00:09:51.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:51.974 EAL: Requested device 0000:3f:01.6 cannot be used
00:09:51.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:51.975 EAL: Requested device 0000:3f:01.7 cannot be used
00:09:51.975 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:51.975 EAL: Requested device 0000:3f:02.0 cannot be used
00:09:51.975 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:51.975 EAL: Requested device 0000:3f:02.1 cannot be used
00:09:51.975 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:51.975 EAL: Requested device 0000:3f:02.2 cannot be used
00:09:51.975 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:51.975 EAL: Requested device 0000:3f:02.3 cannot be used
00:09:51.975 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:51.975 EAL: Requested device 0000:3f:02.4 cannot be used
00:09:51.975 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:51.975 EAL: Requested device 0000:3f:02.5 cannot be used
00:09:51.975 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:51.975 EAL: Requested device 0000:3f:02.6 cannot be used
00:09:51.975 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:51.975 EAL: Requested device 0000:3f:02.7 cannot be used
00:10:01.957
00:10:01.957 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=636351: Fri Jul 26 13:09:42 2024
00:10:01.957 write: IOPS=144k, BW=562MiB/s (589MB/s)(5622MiB/10001msec); 0 zone resets
00:10:01.957 slat (usec): min=3, max=3971, avg=34.57, stdev=10.30
00:10:01.957 clat (usec): min=9, max=4463, avg=240.85, stdev=85.21
00:10:01.957 lat (usec): min=15, max=4530, avg=275.43, stdev=89.10
00:10:01.957 clat percentiles (usec):
00:10:01.957 | 50.000th=[ 233], 99.000th=[ 445], 99.900th=[ 545], 99.990th=[ 676],
00:10:01.957 | 99.999th=[ 1205]
00:10:01.957 bw ( KiB/s): min=507008, max=757258, per=100.00%, avg=577630.79, stdev=5151.18, samples=266
00:10:01.957 iops : min=126752, max=189313, avg=144407.58, stdev=1287.79, samples=266
00:10:01.957 trim: IOPS=144k, BW=562MiB/s (589MB/s)(5622MiB/10001msec); 0 zone resets
00:10:01.957 slat (usec): min=4, max=396, avg=23.97, stdev= 6.86
00:10:01.957 clat (usec): min=5, max=4531, avg=274.79, stdev=89.81
00:10:01.957 lat (usec): min=16, max=4574, avg=298.76, stdev=92.76
00:10:01.957 clat percentiles (usec):
00:10:01.957 | 50.000th=[ 269], 99.000th=[ 490], 99.900th=[ 603], 99.990th=[ 742],
00:10:01.957 | 99.999th=[ 1045]
00:10:01.957 bw ( KiB/s): min=507008, max=757266, per=100.00%, avg=577630.79, stdev=5151.19, samples=266
00:10:01.957 iops : min=126752, max=189315, avg=144407.58, stdev=1287.79, samples=266
00:10:01.957 lat (usec) : 10=0.01%, 20=0.01%, 50=0.08%, 100=1.56%, 250=48.63%
00:10:01.957 lat (usec) : 500=49.19%, 750=0.53%, 1000=0.01%
00:10:01.957 lat (msec) : 2=0.01%, 10=0.01%
00:10:01.957 cpu : usr=99.62%, sys=0.00%, ctx=460, majf=0, minf=977
00:10:01.957 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:10:01.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:01.957 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:01.957 issued rwts: total=0,1439318,1439320,0 short=0,0,0,0 dropped=0,0,0,0
00:10:01.957 latency : target=0, window=0, percentile=100.00%, depth=8
00:10:01.957
00:10:01.957 Run status group 0 (all jobs):
00:10:01.957 WRITE: bw=562MiB/s (589MB/s), 562MiB/s-562MiB/s (589MB/s-589MB/s), io=5622MiB (5895MB), run=10001-10001msec
00:10:01.957 TRIM: bw=562MiB/s (589MB/s), 562MiB/s-562MiB/s (589MB/s-589MB/s), io=5622MiB (5895MB), run=10001-10001msec
00:10:02.215
00:10:02.215 real 0m11.410s
00:10:02.215 user 2m33.766s
00:10:02.215 sys 0m0.504s
00:10:02.215 13:09:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:02.215 13:09:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x
00:10:02.215 ************************************
00:10:02.215 END TEST bdev_fio_trim
00:10:02.215 ************************************
00:10:02.215 13:09:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f
00:10:02.475 13:09:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:10:02.475 13:09:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # popd
00:10:02.475 /var/jenkins/workspace/crypto-phy-autotest/spdk
00:10:02.475 13:09:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT
00:10:02.475
00:10:02.475 real 0m23.241s
00:10:02.475 user 5m26.373s
00:10:02.475 sys 0m1.975s
00:10:02.475 13:09:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:02.475 13:09:42 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:10:02.475 ************************************
00:10:02.475 END TEST bdev_fio
00:10:02.475 ************************************
00:10:02.475 13:09:42 blockdev_general -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:10:02.475 13:09:42 blockdev_general -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:10:02.475 13:09:42 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:10:02.475 13:09:42 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:02.475 13:09:42 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:02.475 ************************************
00:10:02.475 START TEST bdev_verify
00:10:02.475 ************************************
00:10:02.475 13:09:42 blockdev_general.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:10:02.475 [2024-07-26 13:09:42.886854] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization...
00:10:02.475 [2024-07-26 13:09:42.886909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid638231 ]
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3d:01.0 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3d:01.1 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3d:01.2 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3d:01.3 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3d:01.4 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3d:01.5 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3d:01.6 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3d:01.7 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3d:02.0 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3d:02.1 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3d:02.2 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3d:02.3 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3d:02.4 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3d:02.5 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3d:02.6 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3d:02.7 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3f:01.0 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3f:01.1 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3f:01.2 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3f:01.3 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3f:01.4 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3f:01.5 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3f:01.6 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3f:01.7 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3f:02.0 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3f:02.1 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3f:02.2 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3f:02.3 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3f:02.4 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3f:02.5 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3f:02.6 cannot be used
00:10:02.475 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:02.475 EAL: Requested device 0000:3f:02.7 cannot be used
00:10:02.734 [2024-07-26 13:09:43.019526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:02.734 [2024-07-26 13:09:43.103987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:10:02.734 [2024-07-26 13:09:43.103993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:02.734 [2024-07-26 13:09:43.246513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:02.734 [2024-07-26 13:09:43.246567] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:10:02.734 [2024-07-26 13:09:43.246580] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:10:02.735 [2024-07-26 13:09:43.254521] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:10:02.735 [2024-07-26 13:09:43.254545] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:10:02.993 [2024-07-26 13:09:43.262533] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:10:02.993 [2024-07-26 13:09:43.262555] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:10:02.993 [2024-07-26 13:09:43.333603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:02.993 [2024-07-26 13:09:43.333649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:02.993 [2024-07-26 13:09:43.333664] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2833450
00:10:02.993 [2024-07-26 13:09:43.333676] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:02.993 [2024-07-26 13:09:43.334963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:02.993 [2024-07-26 13:09:43.334993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:10:03.252 Running I/O for 5 seconds...
00:10:08.529
00:10:08.529 Latency(us)
00:10:08.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:08.529 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x0 length 0x1000
00:10:08.529 Malloc0 : 5.17 1163.98 4.55 0.00 0.00 109743.00 494.80 261724.57
00:10:08.529 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x1000 length 0x1000
00:10:08.529 Malloc0 : 5.14 1145.49 4.47 0.00 0.00 111510.55 543.95 402653.18
00:10:08.529 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x0 length 0x800
00:10:08.529 Malloc1p0 : 5.17 594.15 2.32 0.00 0.00 214177.22 3342.34 246625.08
00:10:08.529 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x800 length 0x800
00:10:08.529 Malloc1p0 : 5.14 597.39 2.33 0.00 0.00 213051.66 3329.23 233203.30
00:10:08.529 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x0 length 0x800
00:10:08.529 Malloc1p1 : 5.17 593.92 2.32 0.00 0.00 213593.16 3460.30 239914.19
00:10:08.529 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x800 length 0x800
00:10:08.529 Malloc1p1 : 5.14 597.14 2.33 0.00 0.00 212462.71 3512.73 228170.14
00:10:08.529 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x0 length 0x200
00:10:08.529 Malloc2p0 : 5.17 593.70 2.32 0.00 0.00 213009.60 3329.23 234881.02
00:10:08.529 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x200 length 0x200
00:10:08.529 Malloc2p0 : 5.15 596.90 2.33 0.00 0.00 211885.50 3316.12 221459.25
00:10:08.529 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x0 length 0x200
00:10:08.529 Malloc2p1 : 5.18 593.47 2.32 0.00 0.00 212454.37 3355.44 229847.86
00:10:08.529 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x200 length 0x200
00:10:08.529 Malloc2p1 : 5.15 596.65 2.33 0.00 0.00 211325.79 3355.44 216426.09
00:10:08.529 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x0 length 0x200
00:10:08.529 Malloc2p2 : 5.18 593.25 2.32 0.00 0.00 211877.67 3316.12 224814.69
00:10:08.529 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x200 length 0x200
00:10:08.529 Malloc2p2 : 5.15 596.41 2.33 0.00 0.00 210743.59 3342.34 211392.92
00:10:08.529 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x0 length 0x200
00:10:08.529 Malloc2p3 : 5.18 593.02 2.32 0.00 0.00 211352.15 3276.80 221459.25
00:10:08.529 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x200 length 0x200
00:10:08.529 Malloc2p3 : 5.15 596.18 2.33 0.00 0.00 210206.21 3289.91 206359.76
00:10:08.529 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x0 length 0x200
00:10:08.529 Malloc2p4 : 5.18 592.79 2.32 0.00 0.00 210770.91 3381.66 214748.36
00:10:08.529 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x200 length 0x200
00:10:08.529 Malloc2p4 : 5.24 610.54 2.38 0.00 0.00 204697.72 3381.66 203004.31
00:10:08.529 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x0 length 0x200
00:10:08.529 Malloc2p5 : 5.18 592.57 2.31 0.00 0.00 210183.29 3355.44 210554.06
00:10:08.529 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x200 length 0x200
00:10:08.529 Malloc2p5 : 5.24 610.30 2.38 0.00 0.00 204153.38 3381.66 195454.57
00:10:08.529 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x0 length 0x200
00:10:08.529 Malloc2p6 : 5.25 609.10 2.38 0.00 0.00 203892.00 3486.52 204682.04
00:10:08.529 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x200 length 0x200
00:10:08.529 Malloc2p6 : 5.25 610.06 2.38 0.00 0.00 203591.07 3460.30 190421.40
00:10:08.529 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x0 length 0x200
00:10:08.529 Malloc2p7 : 5.26 608.52 2.38 0.00 0.00 203494.73 3303.01 199648.87
00:10:08.529 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x200 length 0x200
00:10:08.529 Malloc2p7 : 5.25 609.81 2.38 0.00 0.00 203061.52 3303.01 187904.82
00:10:08.529 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x0 length 0x1000
00:10:08.529 TestPT : 5.26 585.89 2.29 0.00 0.00 209847.40 19713.23 198810.01
00:10:08.529 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x1000 length 0x1000
00:10:08.529 TestPT : 5.27 586.16 2.29 0.00 0.00 209912.11 16567.50 265080.01
00:10:08.529 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x0 length 0x2000
00:10:08.529 raid0 : 5.26 607.91 2.37 0.00 0.00 202063.56 3158.84 171966.46
00:10:08.529 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x2000 length 0x2000
00:10:08.529 raid0 : 5.25 609.39 2.38 0.00 0.00 201606.19 3171.94 156866.97
00:10:08.529 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x0 length 0x2000
00:10:08.529 concat0 : 5.27 607.69 2.37 0.00 0.00 201529.00 3185.05 166933.30
00:10:08.529 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x2000 length 0x2000
00:10:08.529 concat0 : 5.25 609.10 2.38 0.00 0.00 201098.01 3185.05 158544.69
00:10:08.529 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x0 length 0x1000
00:10:08.529 raid1 : 5.27 607.37 2.37 0.00 0.00 201069.26 3670.02 164416.72
00:10:08.529 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x1000 length 0x1000
00:10:08.529 raid1 : 5.26 608.51 2.38 0.00 0.00 200718.48 3748.66 163577.86
00:10:08.529 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x0 length 0x4e2
00:10:08.529 AIO0 : 5.27 607.21 2.37 0.00 0.00 200511.74 1382.81 167772.16
00:10:08.529 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:08.529 Verification LBA range: start 0x4e2 length 0x4e2
00:10:08.529 AIO0 : 5.27 631.37 2.47 0.00 0.00 192904.92 1382.81 171127.60
00:10:08.529 ===================================================================================================================
00:10:08.529 Total : 20355.93 79.52 0.00 0.00 196107.95 494.80 402653.18
00:10:08.822
00:10:08.822 real 0m6.396s
00:10:08.822 user 0m11.943s
00:10:08.823 sys 0m0.351s
00:10:08.823 13:09:49 blockdev_general.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:08.823 13:09:49 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:10:08.823 ************************************
00:10:08.823 END TEST bdev_verify
00:10:08.823 ************************************
00:10:08.823 13:09:49 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:10:08.823 13:09:49 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:10:08.823 13:09:49 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:08.823 13:09:49 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:08.823 ************************************
00:10:08.823 START TEST bdev_verify_big_io
00:10:08.823 ************************************
00:10:08.823 13:09:49 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:10:09.081 [2024-07-26 13:09:49.361777] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization...
00:10:09.082 [2024-07-26 13:09:49.361819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid639311 ]
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3d:01.0 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3d:01.1 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3d:01.2 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3d:01.3 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3d:01.4 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3d:01.5 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3d:01.6 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3d:01.7 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3d:02.0 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3d:02.1 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3d:02.2 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3d:02.3 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3d:02.4 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3d:02.5 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3d:02.6 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3d:02.7 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3f:01.0 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3f:01.1 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3f:01.2 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3f:01.3 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3f:01.4 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3f:01.5 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3f:01.6 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3f:01.7 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3f:02.0 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3f:02.1 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3f:02.2 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3f:02.3 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3f:02.4 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3f:02.5 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3f:02.6 cannot be used
00:10:09.082 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:09.082 EAL: Requested device 0000:3f:02.7 cannot be used
00:10:09.082 [2024-07-26 13:09:49.481231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:09.082 [2024-07-26 13:09:49.570165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:10:09.082 [2024-07-26 13:09:49.570172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:09.342 [2024-07-26 13:09:49.714792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:09.342 [2024-07-26 13:09:49.714840] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:10:09.342 [2024-07-26 13:09:49.714853] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:10:09.342 [2024-07-26 13:09:49.722801] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:10:09.342 [2024-07-26 13:09:49.722826] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:10:09.342 [2024-07-26 13:09:49.730815] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:10:09.342 [2024-07-26 13:09:49.730838] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:10:09.342 [2024-07-26 13:09:49.802142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:09.342 [2024-07-26 13:09:49.802192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:09.342 [2024-07-26 13:09:49.802207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2142450
00:10:09.342 [2024-07-26 13:09:49.802219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:09.342 [2024-07-26 13:09:49.803515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:09.342 [2024-07-26 13:09:49.803544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:10:09.601 [2024-07-26 13:09:49.963003] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32
00:10:09.601 [2024-07-26 13:09:49.963937] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32
00:10:09.601 [2024-07-26 13:09:49.965510] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32
00:10:09.601 [2024-07-26 13:09:49.966589] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32
00:10:09.601 [2024-07-26 13:09:49.968274] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32
00:10:09.601 [2024-07-26 13:09:49.969406] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32
00:10:09.601 [2024-07-26 13:09:49.971085] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32
00:10:09.601 [2024-07-26 13:09:49.972801] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32
00:10:09.602 [2024-07-26 13:09:49.973938] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32
00:10:09.602 [2024-07-26 13:09:49.975478] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32
00:10:09.602 [2024-07-26 13:09:49.976339] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32
00:10:09.602 [2024-07-26 13:09:49.977665] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32
00:10:09.602 [2024-07-26 13:09:49.978526] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32
00:10:09.602 [2024-07-26 13:09:49.979861] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32
00:10:09.602 [2024-07-26 13:09:49.980735] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32
00:10:09.602 [2024-07-26 13:09:49.982058] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32
00:10:09.602 [2024-07-26 13:09:50.002854] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78
00:10:09.602 [2024-07-26 13:09:50.004605] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78
00:10:09.602 Running I/O for 5 seconds...
00:10:17.722
00:10:17.722 Latency(us)
00:10:17.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:17.722 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:17.722 Verification LBA range: start 0x0 length 0x100
00:10:17.722 Malloc0 : 5.95 172.00 10.75 0.00 0.00 730458.86 829.03 2120640.10
00:10:17.722 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:17.722 Verification LBA range: start 0x100 length 0x100
00:10:17.722 Malloc0 : 6.09 168.02 10.50 0.00 0.00 748288.14 799.54 2147483.65
00:10:17.722 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:17.722 Verification LBA range: start 0x0 length 0x80
00:10:17.722 Malloc1p0 : 6.26 61.31 3.83 0.00 0.00 1948122.83 2673.87 3046742.43
00:10:17.722 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:17.722 Verification LBA range: start 0x80 length 0x80
00:10:17.722 Malloc1p0 : 6.25 76.80 4.80 0.00 0.00 1566637.44 3198.16 2563558.60
00:10:17.722 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:17.722 Verification LBA range: start 0x0 length 0x80
00:10:17.722 Malloc1p1 : 6.64 38.55 2.41 0.00 0.00 2917724.20 1389.36 5100273.66
00:10:17.722 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:17.722 Verification LBA range: start 0x80 length 0x80
00:10:17.722 Malloc1p1 : 6.64 38.58 2.41 0.00 0.00 2936549.49 1395.92 5261334.94
00:10:17.722 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:10:17.722 Verification LBA range: start 0x0 length 0x20
00:10:17.722 Malloc2p0 : 6.13 26.10 1.63 0.00 0.00 1088735.57 596.38 1785095.78
00:10:17.722 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:10:17.722 Verification LBA range: start 0x20 length 0x20
00:10:17.722 Malloc2p0 : 6.17 25.93 1.62 0.00 0.00 1104845.08 586.55 1825361.10
00:10:17.722 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:10:17.722 Verification LBA range: start 0x0 length 0x20
00:10:17.722 Malloc2p1 : 6.13 26.09 1.63 0.00 0.00 1079888.31 573.44 1758252.24
00:10:17.722 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:10:17.722 Verification LBA range: start 0x20 length 0x20
00:10:17.722 Malloc2p1 : 6.17 25.92 1.62 0.00 0.00 1095105.55 589.82 1798517.56
00:10:17.722 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:10:17.722 Verification LBA range: start 0x0 length 0x20
00:10:17.722 Malloc2p2 : 6.13 26.09 1.63 0.00 0.00 1070711.00 576.72 1731408.69
00:10:17.722 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:10:17.722 Verification LBA range: start 0x20 length 0x20
00:10:17.722 Malloc2p2 : 6.17 25.92 1.62 0.00 0.00 1086212.00 593.10 1771674.01
00:10:17.722 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:10:17.722 Verification LBA range: start 0x0 length 0x20
00:10:17.722 Malloc2p3 : 6.13 26.08 1.63 0.00 0.00 1061997.07 586.55 1704565.15
00:10:17.722 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:10:17.722 Verification LBA range: start 0x20 length 0x20
00:10:17.722 Malloc2p3 : 6.17 25.91 1.62 0.00 0.00 1077012.41 596.38 1744830.46
00:10:17.722 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:10:17.722 Verification LBA range: start 0x0 length 0x20
00:10:17.722 Malloc2p4 : 6.20 28.39 1.77 0.00 0.00 978614.39 583.27 1677721.60
00:10:17.722 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:10:17.722 Verification LBA range: start 0x20 length 0x20
00:10:17.722 Malloc2p4 : 6.18 25.91 1.62 0.00 0.00 1068044.13 596.38 1717986.92
00:10:17.722 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:10:17.722 Verification LBA range: start 0x0 length
0x20 00:10:17.722 Malloc2p5 : 6.20 28.38 1.77 0.00 0.00 970411.78 576.72 1657588.94 00:10:17.722 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:10:17.722 Verification LBA range: start 0x20 length 0x20 00:10:17.722 Malloc2p5 : 6.18 25.90 1.62 0.00 0.00 1059435.63 593.10 1697854.26 00:10:17.722 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:10:17.722 Verification LBA range: start 0x0 length 0x20 00:10:17.722 Malloc2p6 : 6.20 28.38 1.77 0.00 0.00 962071.93 599.65 1630745.40 00:10:17.722 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:10:17.722 Verification LBA range: start 0x20 length 0x20 00:10:17.722 Malloc2p6 : 6.18 25.90 1.62 0.00 0.00 1050373.66 596.38 1671010.71 00:10:17.722 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:10:17.722 Verification LBA range: start 0x0 length 0x20 00:10:17.722 Malloc2p7 : 6.20 28.37 1.77 0.00 0.00 953881.94 573.44 1603901.85 00:10:17.722 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:10:17.722 Verification LBA range: start 0x20 length 0x20 00:10:17.722 Malloc2p7 : 6.18 25.89 1.62 0.00 0.00 1041158.01 586.55 1644167.17 00:10:17.722 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:17.722 Verification LBA range: start 0x0 length 0x100 00:10:17.722 TestPT : 6.69 38.58 2.41 0.00 0.00 2666380.83 68367.16 4026531.84 00:10:17.722 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:17.722 Verification LBA range: start 0x100 length 0x100 00:10:17.722 TestPT : 6.64 36.16 2.26 0.00 0.00 2833725.96 102341.02 4053375.39 00:10:17.722 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:17.722 Verification LBA range: start 0x0 length 0x200 00:10:17.722 raid0 : 6.76 42.60 2.66 0.00 0.00 2340704.49 1448.35 4563402.75 00:10:17.722 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 
00:10:17.722 Verification LBA range: start 0x200 length 0x200 00:10:17.722 raid0 : 6.54 44.06 2.75 0.00 0.00 2287257.22 1454.90 4697620.48 00:10:17.722 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:17.722 Verification LBA range: start 0x0 length 0x200 00:10:17.722 concat0 : 6.64 52.98 3.31 0.00 0.00 1866714.90 1415.58 4402341.48 00:10:17.722 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:17.722 Verification LBA range: start 0x200 length 0x200 00:10:17.722 concat0 : 6.64 54.38 3.40 0.00 0.00 1816236.65 1474.56 4536559.21 00:10:17.722 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:17.722 Verification LBA range: start 0x0 length 0x100 00:10:17.722 raid1 : 6.69 57.39 3.59 0.00 0.00 1691259.84 1887.44 4241280.20 00:10:17.722 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:17.722 Verification LBA range: start 0x100 length 0x100 00:10:17.722 raid1 : 6.69 57.42 3.59 0.00 0.00 1685127.00 1848.12 4375497.93 00:10:17.722 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:10:17.722 Verification LBA range: start 0x0 length 0x4e 00:10:17.722 AIO0 : 6.76 72.16 4.51 0.00 0.00 801161.42 576.72 2496449.74 00:10:17.722 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:10:17.722 Verification LBA range: start 0x4e length 0x4e 00:10:17.722 AIO0 : 6.77 68.09 4.26 0.00 0.00 846605.36 779.88 2603823.92 00:10:17.722 =================================================================================================================== 00:10:17.722 Total : 1504.24 94.01 0.00 0.00 1397194.15 573.44 5261334.94 00:10:17.722 00:10:17.722 real 0m7.933s 00:10:17.722 user 0m14.967s 00:10:17.722 sys 0m0.395s 00:10:17.722 13:09:57 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:17.722 13:09:57 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 
00:10:17.722 ************************************
00:10:17.722 END TEST bdev_verify_big_io
00:10:17.722 ************************************
00:10:17.722 13:09:57 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:17.722 13:09:57 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:10:17.722 13:09:57 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:17.722 13:09:57 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:17.722 ************************************
00:10:17.722 START TEST bdev_write_zeroes
00:10:17.722 ************************************
00:10:17.722 13:09:57 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:17.722 [2024-07-26 13:09:57.379743] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization...
00:10:17.722 [2024-07-26 13:09:57.379798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid640650 ]
00:10:17.722 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.722 EAL: Requested device 0000:3d:01.0 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3d:01.1 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3d:01.2 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3d:01.3 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3d:01.4 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3d:01.5 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3d:01.6 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3d:01.7 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3d:02.0 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3d:02.1 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3d:02.2 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3d:02.3 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3d:02.4 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3d:02.5 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3d:02.6 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3d:02.7 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3f:01.0 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3f:01.1 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3f:01.2 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3f:01.3 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3f:01.4 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3f:01.5 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3f:01.6 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3f:01.7 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3f:02.0 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3f:02.1 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3f:02.2 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3f:02.3 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3f:02.4 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3f:02.5 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3f:02.6 cannot be used
00:10:17.723 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:17.723 EAL: Requested device 0000:3f:02.7 cannot be used
00:10:17.723 [2024-07-26 13:09:57.512662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:17.723 [2024-07-26 13:09:57.595641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:17.723 [2024-07-26 13:09:57.747387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:17.723 [2024-07-26 13:09:57.747441] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:10:17.723 [2024-07-26 13:09:57.747455] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:10:17.723 [2024-07-26 13:09:57.755397] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:10:17.723 [2024-07-26 13:09:57.755423] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:10:17.723 [2024-07-26 13:09:57.763407] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:10:17.723 [2024-07-26 13:09:57.763430] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:10:17.723 [2024-07-26 13:09:57.834764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:17.723 [2024-07-26 13:09:57.834809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:17.723 [2024-07-26 13:09:57.834824] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17a43d0
00:10:17.723 [2024-07-26 13:09:57.834835] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:17.723 [2024-07-26 13:09:57.836274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:17.723 [2024-07-26 13:09:57.836302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:10:17.723 Running I/O for 1 seconds...
00:10:18.661
00:10:18.661 Latency(us)
00:10:18.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:18.661 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.661 Malloc0 : 1.05 5383.00 21.03 0.00 0.00 23765.91 616.04 39636.17
00:10:18.661 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.661 Malloc1p0 : 1.05 5375.92 21.00 0.00 0.00 23763.40 838.86 39007.03
00:10:18.661 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.661 Malloc1p1 : 1.05 5368.84 20.97 0.00 0.00 23746.38 832.31 38168.17
00:10:18.661 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.661 Malloc2p0 : 1.05 5361.77 20.94 0.00 0.00 23730.14 835.58 37329.31
00:10:18.661 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.661 Malloc2p1 : 1.05 5354.76 20.92 0.00 0.00 23714.27 832.31 36490.44
00:10:18.661 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.661 Malloc2p2 : 1.05 5347.80 20.89 0.00 0.00 23695.27 851.97 35651.58
00:10:18.661 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.661 Malloc2p3 : 1.05 5340.80 20.86 0.00 0.00 23676.78 825.75 34812.72
00:10:18.661 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.661 Malloc2p4 : 1.06 5333.86 20.84 0.00 0.00 23658.04 829.03 33973.86
00:10:18.661 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.661 Malloc2p5 : 1.06 5326.96 20.81 0.00 0.00 23639.34 825.75 33135.00
00:10:18.661 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.661 Malloc2p6 : 1.06 5320.02 20.78 0.00 0.00 23621.52 825.75 32296.14
00:10:18.661 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.661 Malloc2p7 : 1.06 5313.12 20.75 0.00 0.00 23603.72 829.03 31457.28
00:10:18.661 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.661 TestPT : 1.06 5306.28 20.73 0.00 0.00 23584.93 865.08 30618.42
00:10:18.661 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.661 raid0 : 1.06 5298.36 20.70 0.00 0.00 23560.43 1487.67 29150.41
00:10:18.661 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.661 concat0 : 1.06 5290.65 20.67 0.00 0.00 23516.69 1474.56 27682.41
00:10:18.661 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.661 raid1 : 1.07 5280.96 20.63 0.00 0.00 23466.64 2372.40 25165.82
00:10:18.661 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:18.661 AIO0 : 1.07 5275.00 20.61 0.00 0.00 23388.73 1035.47 24222.11
00:10:18.661 ===================================================================================================================
00:10:18.661 Total : 85278.10 333.12 0.00 0.00 23633.26 616.04 39636.17
00:10:19.230
00:10:19.230 real 0m2.134s
00:10:19.230 user 0m1.726s
00:10:19.230 sys 0m0.334s
00:10:19.230 13:09:59 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:19.230 13:09:59 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:10:19.230 ************************************
00:10:19.230 END TEST bdev_write_zeroes
00:10:19.230 ************************************
00:10:19.231 13:09:59 blockdev_general -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:19.231 13:09:59 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:10:19.231 13:09:59 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:19.231 13:09:59 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:19.231 ************************************
00:10:19.231 START TEST bdev_json_nonenclosed
00:10:19.231 ************************************
00:10:19.231 13:09:59 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:19.231 [2024-07-26 13:09:59.594982] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization...
00:10:19.231 [2024-07-26 13:09:59.595035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid641189 ]
00:10:19.231 [2024-07-26 13:09:59.726300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:19.491 [2024-07-26 13:09:59.809826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:19.491 [2024-07-26 13:09:59.809887] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:10:19.491 [2024-07-26 13:09:59.809902] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:10:19.491 [2024-07-26 13:09:59.809913] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:10:19.491
00:10:19.491 real 0m0.358s
00:10:19.491 user 0m0.213s
00:10:19.491 sys 0m0.143s
00:10:19.491 13:09:59 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:19.491 13:09:59 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:10:19.491 ************************************
00:10:19.491 END TEST bdev_json_nonenclosed
00:10:19.491 ************************************
00:10:19.491 13:09:59 blockdev_general -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:19.491 13:09:59 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:10:19.491 13:09:59 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:19.491 13:09:59 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:19.491 ************************************
00:10:19.491 START TEST bdev_json_nonarray
00:10:19.491 ************************************
00:10:19.491 13:09:59 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:19.751 [2024-07-26 13:10:00.032402] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization...
00:10:19.751 [2024-07-26 13:10:00.032458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid641210 ]
00:10:19.751 [2024-07-26 13:10:00.165529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:19.751 [2024-07-26 13:10:00.248663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:19.751 [2024-07-26 13:10:00.248732] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:10:19.751 [2024-07-26 13:10:00.248748] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:10:19.751 [2024-07-26 13:10:00.248759] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:10:20.021
00:10:20.021 real 0m0.357s
00:10:20.021 user 0m0.227s
00:10:20.021 sys 0m0.128s
00:10:20.021 13:10:00 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:20.021 13:10:00 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:10:20.021 ************************************
00:10:20.021 END TEST bdev_json_nonarray
00:10:20.021 ************************************
00:10:20.022 13:10:00 blockdev_general -- bdev/blockdev.sh@786 -- # [[ bdev == bdev ]]
00:10:20.022 13:10:00 blockdev_general -- bdev/blockdev.sh@787 -- # run_test bdev_qos qos_test_suite ''
00:10:20.022 13:10:00 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:10:20.022 13:10:00 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:20.022 13:10:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:20.022 ************************************
00:10:20.022 START TEST bdev_qos
00:10:20.022 ************************************
00:10:20.022 13:10:00 blockdev_general.bdev_qos -- common/autotest_common.sh@1125 -- # qos_test_suite ''
00:10:20.022 13:10:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # QOS_PID=641238
00:10:20.022 13:10:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # echo 'Process qos testing pid: 641238'
00:10:20.022 Process qos testing pid: 641238
00:10:20.022 13:10:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT
00:10:20.022 13:10:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # waitforlisten 641238
00:10:20.022 13:10:00 blockdev_general.bdev_qos -- common/autotest_common.sh@831 -- # '[' -z 641238 ']'
00:10:20.022 13:10:00 blockdev_general.bdev_qos -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:20.022 13:10:00 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:20.022 13:10:00 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:20.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:20.022 13:10:00 blockdev_general.bdev_qos -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:20.022 13:10:00 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:10:20.022 13:10:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@444 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 ''
00:10:20.022 [2024-07-26 13:10:00.469557] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization...
00:10:20.022 [2024-07-26 13:10:00.469613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid641238 ]
00:10:20.023 [qat_pci_device_allocate(): Reached maximum number of QAT devices / EAL: Requested device ... cannot be used, repeated identically for the remaining functions 0000:3d:01.5 through 0000:3f:02.7] 00:10:20.286 [2024-07-26 13:10:00.590022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.286 [2024-07-26 13:10:00.676101] reactor.c: 941:reactor_run:
*NOTICE*: Reactor started on core 1 00:10:20.854 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:20.854 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@864 -- # return 0 00:10:20.854 13:10:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@450 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:10:20.854 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.854 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:21.114 Malloc_0 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # waitforbdev Malloc_0 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_0 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # local i 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- 
# set +x 00:10:21.114 [ 00:10:21.114 { 00:10:21.114 "name": "Malloc_0", 00:10:21.114 "aliases": [ 00:10:21.114 "34c8870d-c802-47e9-83b9-b931f1ac897a" 00:10:21.114 ], 00:10:21.114 "product_name": "Malloc disk", 00:10:21.114 "block_size": 512, 00:10:21.114 "num_blocks": 262144, 00:10:21.114 "uuid": "34c8870d-c802-47e9-83b9-b931f1ac897a", 00:10:21.114 "assigned_rate_limits": { 00:10:21.114 "rw_ios_per_sec": 0, 00:10:21.114 "rw_mbytes_per_sec": 0, 00:10:21.114 "r_mbytes_per_sec": 0, 00:10:21.114 "w_mbytes_per_sec": 0 00:10:21.114 }, 00:10:21.114 "claimed": false, 00:10:21.114 "zoned": false, 00:10:21.114 "supported_io_types": { 00:10:21.114 "read": true, 00:10:21.114 "write": true, 00:10:21.114 "unmap": true, 00:10:21.114 "flush": true, 00:10:21.114 "reset": true, 00:10:21.114 "nvme_admin": false, 00:10:21.114 "nvme_io": false, 00:10:21.114 "nvme_io_md": false, 00:10:21.114 "write_zeroes": true, 00:10:21.114 "zcopy": true, 00:10:21.114 "get_zone_info": false, 00:10:21.114 "zone_management": false, 00:10:21.114 "zone_append": false, 00:10:21.114 "compare": false, 00:10:21.114 "compare_and_write": false, 00:10:21.114 "abort": true, 00:10:21.114 "seek_hole": false, 00:10:21.114 "seek_data": false, 00:10:21.114 "copy": true, 00:10:21.114 "nvme_iov_md": false 00:10:21.114 }, 00:10:21.114 "memory_domains": [ 00:10:21.114 { 00:10:21.114 "dma_device_id": "system", 00:10:21.114 "dma_device_type": 1 00:10:21.114 }, 00:10:21.114 { 00:10:21.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.114 "dma_device_type": 2 00:10:21.114 } 00:10:21.114 ], 00:10:21.114 "driver_specific": {} 00:10:21.114 } 00:10:21.114 ] 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@907 -- # return 0 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # rpc_cmd bdev_null_create Null_1 128 512 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:21.114 Null_1 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # waitforbdev Null_1 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local bdev_name=Null_1 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # local i 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:21.114 [ 00:10:21.114 { 00:10:21.114 "name": "Null_1", 00:10:21.114 "aliases": [ 00:10:21.114 "bdabdc40-4648-46f9-8a58-b23cb5c17f84" 00:10:21.114 ], 00:10:21.114 "product_name": "Null disk", 00:10:21.114 "block_size": 512, 00:10:21.114 "num_blocks": 262144, 00:10:21.114 "uuid": "bdabdc40-4648-46f9-8a58-b23cb5c17f84", 00:10:21.114 "assigned_rate_limits": { 00:10:21.114 "rw_ios_per_sec": 0, 
00:10:21.114 "rw_mbytes_per_sec": 0, 00:10:21.114 "r_mbytes_per_sec": 0, 00:10:21.114 "w_mbytes_per_sec": 0 00:10:21.114 }, 00:10:21.114 "claimed": false, 00:10:21.114 "zoned": false, 00:10:21.114 "supported_io_types": { 00:10:21.114 "read": true, 00:10:21.114 "write": true, 00:10:21.114 "unmap": false, 00:10:21.114 "flush": false, 00:10:21.114 "reset": true, 00:10:21.114 "nvme_admin": false, 00:10:21.114 "nvme_io": false, 00:10:21.114 "nvme_io_md": false, 00:10:21.114 "write_zeroes": true, 00:10:21.114 "zcopy": false, 00:10:21.114 "get_zone_info": false, 00:10:21.114 "zone_management": false, 00:10:21.114 "zone_append": false, 00:10:21.114 "compare": false, 00:10:21.114 "compare_and_write": false, 00:10:21.114 "abort": true, 00:10:21.114 "seek_hole": false, 00:10:21.114 "seek_data": false, 00:10:21.114 "copy": false, 00:10:21.114 "nvme_iov_md": false 00:10:21.114 }, 00:10:21.114 "driver_specific": {} 00:10:21.114 } 00:10:21.114 ] 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- common/autotest_common.sh@907 -- # return 0 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # qos_function_test 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@409 -- # local qos_lower_iops_limit=1000 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_bw_limit=2 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local io_result=0 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local iops_limit=0 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@455 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local bw_limit=0 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # get_io_result 
IOPS Malloc_0 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:10:21.114 13:10:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:10:21.114 Running I/O for 60 seconds... 00:10:26.387 13:10:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 68437.96 273751.82 0.00 0.00 276480.00 0.00 0.00 ' 00:10:26.387 13:10:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:10:26.387 13:10:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:10:26.387 13:10:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # iostat_result=68437.96 00:10:26.387 13:10:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 68437 00:10:26.387 13:10:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # io_result=68437 00:10:26.387 13:10:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@417 -- # iops_limit=17000 00:10:26.387 13:10:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # '[' 17000 -gt 1000 ']' 00:10:26.387 13:10:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@421 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 17000 Malloc_0 00:10:26.387 13:10:06 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.387 13:10:06 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:26.387 13:10:06 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.387 13:10:06 blockdev_general.bdev_qos -- 
bdev/blockdev.sh@422 -- # run_test bdev_qos_iops run_qos_test 17000 IOPS Malloc_0 00:10:26.387 13:10:06 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:26.387 13:10:06 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:26.387 13:10:06 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:26.387 ************************************ 00:10:26.387 START TEST bdev_qos_iops 00:10:26.387 ************************************ 00:10:26.387 13:10:06 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1125 -- # run_qos_test 17000 IOPS Malloc_0 00:10:26.387 13:10:06 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@388 -- # local qos_limit=17000 00:10:26.387 13:10:06 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_result=0 00:10:26.387 13:10:06 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # get_io_result IOPS Malloc_0 00:10:26.387 13:10:06 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:10:26.387 13:10:06 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:10:26.387 13:10:06 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local iostat_result 00:10:26.387 13:10:06 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:10:26.387 13:10:06 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:10:26.387 13:10:06 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # tail -1 00:10:31.661 13:10:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 16997.90 67991.60 0.00 0.00 68816.00 0.00 0.00 ' 00:10:31.661 13:10:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:10:31.661 
13:10:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:10:31.661 13:10:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # iostat_result=16997.90 00:10:31.661 13:10:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@384 -- # echo 16997 00:10:31.661 13:10:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # qos_result=16997 00:10:31.661 13:10:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # '[' IOPS = BANDWIDTH ']' 00:10:31.661 13:10:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@395 -- # lower_limit=15300 00:10:31.661 13:10:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # upper_limit=18700 00:10:31.661 13:10:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 16997 -lt 15300 ']' 00:10:31.661 13:10:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 16997 -gt 18700 ']' 00:10:31.661 00:10:31.661 real 0m5.223s 00:10:31.661 user 0m0.111s 00:10:31.661 sys 0m0.041s 00:10:31.661 13:10:11 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:31.661 13:10:11 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:10:31.661 ************************************ 00:10:31.661 END TEST bdev_qos_iops 00:10:31.661 ************************************ 00:10:31.661 13:10:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # get_io_result BANDWIDTH Null_1 00:10:31.661 13:10:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:10:31.661 13:10:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:10:31.661 13:10:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:10:31.661 13:10:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 
00:10:31.661 13:10:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Null_1 00:10:31.661 13:10:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:10:36.978 13:10:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 21270.67 85082.68 0.00 0.00 86016.00 0.00 0.00 ' 00:10:36.978 13:10:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:10:36.978 13:10:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:10:36.978 13:10:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:10:36.978 13:10:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # iostat_result=86016.00 00:10:36.978 13:10:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 86016 00:10:36.978 13:10:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # bw_limit=86016 00:10:36.978 13:10:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=8 00:10:36.978 13:10:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # '[' 8 -lt 2 ']' 00:10:36.978 13:10:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@431 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 8 Null_1 00:10:36.978 13:10:17 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.978 13:10:17 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:36.978 13:10:17 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.978 13:10:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # run_test bdev_qos_bw run_qos_test 8 BANDWIDTH Null_1 00:10:36.978 13:10:17 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:36.978 13:10:17 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:36.978 13:10:17 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:36.978 ************************************ 
00:10:36.978 START TEST bdev_qos_bw 00:10:36.978 ************************************ 00:10:36.978 13:10:17 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1125 -- # run_qos_test 8 BANDWIDTH Null_1 00:10:36.978 13:10:17 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@388 -- # local qos_limit=8 00:10:36.978 13:10:17 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:10:36.978 13:10:17 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Null_1 00:10:36.978 13:10:17 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:10:36.978 13:10:17 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:10:36.978 13:10:17 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:10:36.978 13:10:17 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:10:36.978 13:10:17 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # grep Null_1 00:10:36.978 13:10:17 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # tail -1 00:10:42.252 13:10:22 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 2050.18 8200.70 0.00 0.00 8364.00 0.00 0.00 ' 00:10:42.252 13:10:22 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:10:42.252 13:10:22 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:10:42.252 13:10:22 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:10:42.252 13:10:22 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # iostat_result=8364.00 00:10:42.252 13:10:22 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@384 -- # echo 8364 00:10:42.252 13:10:22 
blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # qos_result=8364 00:10:42.252 13:10:22 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:10:42.252 13:10:22 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # qos_limit=8192 00:10:42.252 13:10:22 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@395 -- # lower_limit=7372 00:10:42.252 13:10:22 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # upper_limit=9011 00:10:42.252 13:10:22 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 8364 -lt 7372 ']' 00:10:42.252 13:10:22 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 8364 -gt 9011 ']' 00:10:42.252 00:10:42.252 real 0m5.254s 00:10:42.252 user 0m0.109s 00:10:42.252 sys 0m0.043s 00:10:42.252 13:10:22 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:42.252 13:10:22 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:10:42.252 ************************************ 00:10:42.252 END TEST bdev_qos_bw 00:10:42.252 ************************************ 00:10:42.252 13:10:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@435 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:10:42.252 13:10:22 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.252 13:10:22 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:42.252 13:10:22 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.252 13:10:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:10:42.252 13:10:22 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:42.252 13:10:22 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:42.252 13:10:22 blockdev_general.bdev_qos 
-- common/autotest_common.sh@10 -- # set +x 00:10:42.252 ************************************ 00:10:42.252 START TEST bdev_qos_ro_bw 00:10:42.252 ************************************ 00:10:42.252 13:10:22 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1125 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:10:42.252 13:10:22 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@388 -- # local qos_limit=2 00:10:42.252 13:10:22 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:10:42.252 13:10:22 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Malloc_0 00:10:42.252 13:10:22 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:10:42.252 13:10:22 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:10:42.252 13:10:22 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:10:42.252 13:10:22 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:10:42.252 13:10:22 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:10:42.252 13:10:22 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # tail -1 00:10:47.524 13:10:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 511.97 2047.89 0.00 0.00 2060.00 0.00 0.00 ' 00:10:47.524 13:10:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:10:47.524 13:10:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:10:47.524 13:10:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:10:47.524 13:10:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # 
iostat_result=2060.00 00:10:47.524 13:10:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@384 -- # echo 2060 00:10:47.524 13:10:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # qos_result=2060 00:10:47.524 13:10:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:10:47.524 13:10:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # qos_limit=2048 00:10:47.524 13:10:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@395 -- # lower_limit=1843 00:10:47.524 13:10:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # upper_limit=2252 00:10:47.524 13:10:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2060 -lt 1843 ']' 00:10:47.524 13:10:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2060 -gt 2252 ']' 00:10:47.524 00:10:47.524 real 0m5.176s 00:10:47.524 user 0m0.101s 00:10:47.524 sys 0m0.048s 00:10:47.524 13:10:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:47.524 13:10:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:10:47.524 ************************************ 00:10:47.524 END TEST bdev_qos_ro_bw 00:10:47.524 ************************************ 00:10:47.524 13:10:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:10:47.524 13:10:27 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.524 13:10:27 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:48.093 13:10:28 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.093 13:10:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_null_delete Null_1 00:10:48.093 13:10:28 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.093 13:10:28 
blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:48.093
00:10:48.093 Latency(us)
00:10:48.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:48.093 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:10:48.093 Malloc_0 : 26.70 23245.38 90.80 0.00 0.00 10904.92 1848.12 503316.48
00:10:48.093 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:10:48.093 Null_1 : 26.85 22152.53 86.53 0.00 0.00 11530.41 720.90 149317.22
00:10:48.093 ===================================================================================================================
00:10:48.093 Total : 45397.91 177.34 0.00 0.00 11210.99 720.90 503316.48
00:10:48.093 0 00:10:48.093 13:10:28 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.093 13:10:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # killprocess 641238 00:10:48.093 13:10:28 blockdev_general.bdev_qos -- common/autotest_common.sh@950 -- # '[' -z 641238 ']' 00:10:48.093 13:10:28 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # kill -0 641238 00:10:48.093 13:10:28 blockdev_general.bdev_qos -- common/autotest_common.sh@955 -- # uname 00:10:48.093 13:10:28 blockdev_general.bdev_qos -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:48.093 13:10:28 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 641238 00:10:48.093 13:10:28 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:48.093 13:10:28 blockdev_general.bdev_qos -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:48.093 13:10:28 blockdev_general.bdev_qos -- common/autotest_common.sh@968 -- # echo 'killing process with pid 641238' 00:10:48.093 killing process with pid 641238 13:10:28 blockdev_general.bdev_qos -- common/autotest_common.sh@969 -- # kill 641238 00:10:48.093 Received shutdown signal, test
time was about 26.916712 seconds 00:10:48.093 00:10:48.093 Latency(us) 00:10:48.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:48.093 =================================================================================================================== 00:10:48.093 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:48.093 13:10:28 blockdev_general.bdev_qos -- common/autotest_common.sh@974 -- # wait 641238 00:10:48.353 13:10:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # trap - SIGINT SIGTERM EXIT 00:10:48.353 00:10:48.353 real 0m28.285s 00:10:48.353 user 0m28.929s 00:10:48.353 sys 0m0.804s 00:10:48.353 13:10:28 blockdev_general.bdev_qos -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:48.353 13:10:28 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:10:48.353 ************************************ 00:10:48.353 END TEST bdev_qos 00:10:48.353 ************************************ 00:10:48.353 13:10:28 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:10:48.353 13:10:28 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:48.353 13:10:28 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:48.353 13:10:28 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:48.353 ************************************ 00:10:48.353 START TEST bdev_qd_sampling 00:10:48.353 ************************************ 00:10:48.353 13:10:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1125 -- # qd_sampling_test_suite '' 00:10:48.353 13:10:28 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@537 -- # QD_DEV=Malloc_QD 00:10:48.353 13:10:28 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # QD_PID=646287 00:10:48.353 13:10:28 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # echo 'Process bdev QD sampling period testing pid: 646287' 00:10:48.353 Process bdev QD sampling 
period testing pid: 646287 00:10:48.353 13:10:28 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:10:48.353 13:10:28 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:10:48.353 13:10:28 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # waitforlisten 646287 00:10:48.353 13:10:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@831 -- # '[' -z 646287 ']' 00:10:48.353 13:10:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.353 13:10:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:48.353 13:10:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.353 13:10:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:48.353 13:10:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:10:48.353 [2024-07-26 13:10:28.838255] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:10:48.353 [2024-07-26 13:10:28.838298] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid646287 ] 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3d:02.3 cannot be used 
00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:48.613 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:48.613 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.613 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:48.613 [2024-07-26 13:10:28.955081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:48.613 [2024-07-26 13:10:29.041038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.613 [2024-07-26 13:10:29.041044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.552 13:10:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:49.552 13:10:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@864 -- # return 0 00:10:49.552 13:10:29 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@545 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:10:49.552 13:10:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.552 13:10:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:10:49.552 Malloc_QD 00:10:49.552 13:10:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.552 13:10:29 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # waitforbdev Malloc_QD 00:10:49.552 13:10:29 blockdev_general.bdev_qd_sampling -- 
common/autotest_common.sh@899 -- # local bdev_name=Malloc_QD 00:10:49.552 13:10:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:49.552 13:10:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@901 -- # local i 00:10:49.552 13:10:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:49.552 13:10:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:49.552 13:10:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:49.552 13:10:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.552 13:10:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:10:49.552 13:10:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.552 13:10:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:10:49.552 13:10:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.552 13:10:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:10:49.552 [ 00:10:49.552 { 00:10:49.552 "name": "Malloc_QD", 00:10:49.552 "aliases": [ 00:10:49.552 "30e19f0d-6ca2-48a4-a677-5cbf9efe351f" 00:10:49.552 ], 00:10:49.552 "product_name": "Malloc disk", 00:10:49.552 "block_size": 512, 00:10:49.552 "num_blocks": 262144, 00:10:49.552 "uuid": "30e19f0d-6ca2-48a4-a677-5cbf9efe351f", 00:10:49.552 "assigned_rate_limits": { 00:10:49.552 "rw_ios_per_sec": 0, 00:10:49.552 "rw_mbytes_per_sec": 0, 00:10:49.552 "r_mbytes_per_sec": 0, 00:10:49.552 "w_mbytes_per_sec": 0 00:10:49.552 }, 00:10:49.552 "claimed": false, 00:10:49.552 "zoned": false, 00:10:49.552 "supported_io_types": { 00:10:49.552 "read": true, 00:10:49.552 "write": true, 00:10:49.552 "unmap": true, 
00:10:49.552 "flush": true, 00:10:49.552 "reset": true, 00:10:49.552 "nvme_admin": false, 00:10:49.552 "nvme_io": false, 00:10:49.552 "nvme_io_md": false, 00:10:49.552 "write_zeroes": true, 00:10:49.552 "zcopy": true, 00:10:49.552 "get_zone_info": false, 00:10:49.552 "zone_management": false, 00:10:49.552 "zone_append": false, 00:10:49.552 "compare": false, 00:10:49.552 "compare_and_write": false, 00:10:49.552 "abort": true, 00:10:49.552 "seek_hole": false, 00:10:49.552 "seek_data": false, 00:10:49.553 "copy": true, 00:10:49.553 "nvme_iov_md": false 00:10:49.553 }, 00:10:49.553 "memory_domains": [ 00:10:49.553 { 00:10:49.553 "dma_device_id": "system", 00:10:49.553 "dma_device_type": 1 00:10:49.553 }, 00:10:49.553 { 00:10:49.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.553 "dma_device_type": 2 00:10:49.553 } 00:10:49.553 ], 00:10:49.553 "driver_specific": {} 00:10:49.553 } 00:10:49.553 ] 00:10:49.553 13:10:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.553 13:10:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@907 -- # return 0 00:10:49.553 13:10:29 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # sleep 2 00:10:49.553 13:10:29 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:49.553 Running I/O for 5 seconds... 
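Editor's aside (not part of the original test output): the `bdev_get_iostat` dump that follows reports latencies in CPU ticks, and the Latency(us) table further down reports microseconds. The two are related by the advertised `tick_rate`. A minimal sketch of that conversion, using the figures from this log (`tick_rate` 2500000000, `read_latency_ticks` 2441486866704, `num_read_ops` 195332):

```python
# Convert the tick-based counters from bdev_get_iostat into an average
# per-IO read latency in microseconds. Values are copied from the iostat
# snapshot in this log; this is an illustrative calculation, not a call
# made by the test script itself.
tick_rate = 2_500_000_000              # ticks per second, from "tick_rate"
read_latency_ticks = 2_441_486_866_704 # from "read_latency_ticks"
num_read_ops = 195_332                 # from "num_read_ops"

avg_ticks_per_op = read_latency_ticks / num_read_ops
avg_us = avg_ticks_per_op / tick_rate * 1e6
print(f"average read latency: {avg_us:.1f} us")
```

The result, roughly 5000 us, is consistent with the ~4994 us overall average that bdevperf prints in the summary table below (the small difference comes from the per-job measurement windows).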
00:10:51.461 13:10:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # qd_sampling_function_test Malloc_QD 00:10:51.461 13:10:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@518 -- # local bdev_name=Malloc_QD 00:10:51.461 13:10:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local sampling_period=10 00:10:51.461 13:10:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local iostats 00:10:51.461 13:10:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@522 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:10:51.461 13:10:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.461 13:10:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:10:51.461 13:10:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.461 13:10:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:10:51.461 13:10:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.461 13:10:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:10:51.461 13:10:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.461 13:10:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # iostats='{ 00:10:51.461 "tick_rate": 2500000000, 00:10:51.461 "ticks": 14479384190360340, 00:10:51.461 "bdevs": [ 00:10:51.461 { 00:10:51.461 "name": "Malloc_QD", 00:10:51.461 "bytes_read": 800109056, 00:10:51.461 "num_read_ops": 195332, 00:10:51.461 "bytes_written": 0, 00:10:51.461 "num_write_ops": 0, 00:10:51.461 "bytes_unmapped": 0, 00:10:51.461 "num_unmap_ops": 0, 00:10:51.461 "bytes_copied": 0, 00:10:51.461 "num_copy_ops": 0, 00:10:51.461 "read_latency_ticks": 2441486866704, 00:10:51.461 "max_read_latency_ticks": 15569650, 00:10:51.461 "min_read_latency_ticks": 278706, 
00:10:51.461 "write_latency_ticks": 0, 00:10:51.461 "max_write_latency_ticks": 0, 00:10:51.461 "min_write_latency_ticks": 0, 00:10:51.461 "unmap_latency_ticks": 0, 00:10:51.461 "max_unmap_latency_ticks": 0, 00:10:51.461 "min_unmap_latency_ticks": 0, 00:10:51.461 "copy_latency_ticks": 0, 00:10:51.461 "max_copy_latency_ticks": 0, 00:10:51.461 "min_copy_latency_ticks": 0, 00:10:51.461 "io_error": {}, 00:10:51.461 "queue_depth_polling_period": 10, 00:10:51.461 "queue_depth": 512, 00:10:51.461 "io_time": 20, 00:10:51.461 "weighted_io_time": 10240 00:10:51.461 } 00:10:51.461 ] 00:10:51.461 }' 00:10:51.461 13:10:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:10:51.461 13:10:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # qd_sampling_period=10 00:10:51.461 13:10:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 == null ']' 00:10:51.461 13:10:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 -ne 10 ']' 00:10:51.461 13:10:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@552 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:10:51.461 13:10:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.461 13:10:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:10:51.461 00:10:51.461 Latency(us) 00:10:51.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:51.461 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:10:51.461 Malloc_QD : 1.98 50836.36 198.58 0.00 0.00 5023.61 1336.93 6239.03 00:10:51.461 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:10:51.461 Malloc_QD : 1.98 51447.10 200.97 0.00 0.00 4964.29 871.63 6134.17 00:10:51.461 =================================================================================================================== 00:10:51.462 Total : 102283.46 399.54 
0.00 0.00 4993.76 871.63 6239.03 00:10:51.462 0 00:10:51.462 13:10:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.462 13:10:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # killprocess 646287 00:10:51.462 13:10:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@950 -- # '[' -z 646287 ']' 00:10:51.462 13:10:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # kill -0 646287 00:10:51.462 13:10:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@955 -- # uname 00:10:51.462 13:10:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:51.462 13:10:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 646287 00:10:51.462 13:10:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:51.462 13:10:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:51.462 13:10:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@968 -- # echo 'killing process with pid 646287' 00:10:51.462 killing process with pid 646287 00:10:51.462 13:10:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@969 -- # kill 646287 00:10:51.462 Received shutdown signal, test time was about 2.064499 seconds 00:10:51.462 00:10:51.462 Latency(us) 00:10:51.462 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:51.462 =================================================================================================================== 00:10:51.462 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:51.462 13:10:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@974 -- # wait 646287 00:10:51.722 13:10:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # trap - SIGINT SIGTERM EXIT 00:10:51.722 00:10:51.722 real 0m3.392s 00:10:51.722 
user 0m6.691s 00:10:51.722 sys 0m0.411s 00:10:51.722 13:10:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:51.722 13:10:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:10:51.722 ************************************ 00:10:51.722 END TEST bdev_qd_sampling 00:10:51.722 ************************************ 00:10:51.722 13:10:32 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_error error_test_suite '' 00:10:51.722 13:10:32 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:51.722 13:10:32 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:51.722 13:10:32 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:51.982 ************************************ 00:10:51.982 START TEST bdev_error 00:10:51.982 ************************************ 00:10:51.982 13:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@1125 -- # error_test_suite '' 00:10:51.982 13:10:32 blockdev_general.bdev_error -- bdev/blockdev.sh@465 -- # DEV_1=Dev_1 00:10:51.982 13:10:32 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_2=Dev_2 00:10:51.982 13:10:32 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # ERR_DEV=EE_Dev_1 00:10:51.982 13:10:32 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # ERR_PID=646896 00:10:51.982 13:10:32 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # echo 'Process error testing pid: 646896' 00:10:51.982 Process error testing pid: 646896 00:10:51.982 13:10:32 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # waitforlisten 646896 00:10:51.982 13:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # '[' -z 646896 ']' 00:10:51.982 13:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.982 13:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # local max_retries=100 
00:10:51.982 13:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.982 13:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:51.982 13:10:32 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:51.982 13:10:32 blockdev_general.bdev_error -- bdev/blockdev.sh@470 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:10:51.982 [2024-07-26 13:10:32.312462] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:10:51.982 [2024-07-26 13:10:32.312519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid646896 ] 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:10:51.982 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:10:51.982 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:51.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:51.982 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:51.982 [2024-07-26 13:10:32.432666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.241 [2024-07-26 13:10:32.518289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.831 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:52.831 13:10:33 blockdev_general.bdev_error -- 
common/autotest_common.sh@864 -- # return 0 00:10:52.831 13:10:33 blockdev_general.bdev_error -- bdev/blockdev.sh@475 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:10:52.831 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.831 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:52.831 Dev_1 00:10:52.832 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.832 13:10:33 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # waitforbdev Dev_1 00:10:52.832 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_1 00:10:52.832 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:52.832 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:10:52.832 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:52.832 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:52.832 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:52.832 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.832 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:52.832 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.832 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:10:52.832 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.832 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:52.832 [ 00:10:52.832 { 00:10:52.832 "name": "Dev_1", 00:10:52.832 "aliases": [ 00:10:52.832 "70117372-42f6-46b8-8d38-91663c8b6ee4" 00:10:52.832 ], 
00:10:52.832 "product_name": "Malloc disk", 00:10:52.832 "block_size": 512, 00:10:52.832 "num_blocks": 262144, 00:10:52.832 "uuid": "70117372-42f6-46b8-8d38-91663c8b6ee4", 00:10:52.832 "assigned_rate_limits": { 00:10:52.832 "rw_ios_per_sec": 0, 00:10:52.832 "rw_mbytes_per_sec": 0, 00:10:52.832 "r_mbytes_per_sec": 0, 00:10:52.832 "w_mbytes_per_sec": 0 00:10:52.832 }, 00:10:52.832 "claimed": false, 00:10:52.832 "zoned": false, 00:10:52.832 "supported_io_types": { 00:10:52.832 "read": true, 00:10:52.832 "write": true, 00:10:52.832 "unmap": true, 00:10:52.832 "flush": true, 00:10:52.832 "reset": true, 00:10:52.832 "nvme_admin": false, 00:10:52.832 "nvme_io": false, 00:10:52.832 "nvme_io_md": false, 00:10:52.832 "write_zeroes": true, 00:10:52.832 "zcopy": true, 00:10:52.832 "get_zone_info": false, 00:10:52.832 "zone_management": false, 00:10:52.832 "zone_append": false, 00:10:52.832 "compare": false, 00:10:52.832 "compare_and_write": false, 00:10:52.832 "abort": true, 00:10:52.832 "seek_hole": false, 00:10:52.832 "seek_data": false, 00:10:52.832 "copy": true, 00:10:52.832 "nvme_iov_md": false 00:10:52.832 }, 00:10:52.832 "memory_domains": [ 00:10:52.833 { 00:10:52.833 "dma_device_id": "system", 00:10:52.833 "dma_device_type": 1 00:10:52.833 }, 00:10:52.833 { 00:10:52.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.833 "dma_device_type": 2 00:10:52.833 } 00:10:52.833 ], 00:10:52.833 "driver_specific": {} 00:10:52.833 } 00:10:52.833 ] 00:10:52.833 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.833 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:10:52.833 13:10:33 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_error_create Dev_1 00:10:52.833 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.833 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:52.833 true 00:10:52.833 
13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.833 13:10:33 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:10:52.833 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.833 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:52.833 Dev_2 00:10:52.833 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.833 13:10:33 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # waitforbdev Dev_2 00:10:52.833 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_2 00:10:52.833 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:52.833 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:10:52.833 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:52.833 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:52.834 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:52.834 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.834 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:52.834 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.834 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:10:52.834 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.834 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:52.834 [ 00:10:52.834 { 00:10:52.834 "name": "Dev_2", 00:10:52.834 "aliases": [ 00:10:52.834 
"547d6931-4e7f-452d-88b1-faa77348d8cc" 00:10:52.834 ], 00:10:52.834 "product_name": "Malloc disk", 00:10:52.834 "block_size": 512, 00:10:52.834 "num_blocks": 262144, 00:10:52.834 "uuid": "547d6931-4e7f-452d-88b1-faa77348d8cc", 00:10:52.834 "assigned_rate_limits": { 00:10:52.834 "rw_ios_per_sec": 0, 00:10:52.834 "rw_mbytes_per_sec": 0, 00:10:52.834 "r_mbytes_per_sec": 0, 00:10:52.834 "w_mbytes_per_sec": 0 00:10:52.834 }, 00:10:52.834 "claimed": false, 00:10:52.834 "zoned": false, 00:10:52.834 "supported_io_types": { 00:10:52.834 "read": true, 00:10:52.834 "write": true, 00:10:52.834 "unmap": true, 00:10:52.834 "flush": true, 00:10:52.834 "reset": true, 00:10:52.834 "nvme_admin": false, 00:10:52.834 "nvme_io": false, 00:10:52.835 "nvme_io_md": false, 00:10:52.835 "write_zeroes": true, 00:10:52.835 "zcopy": true, 00:10:52.835 "get_zone_info": false, 00:10:52.835 "zone_management": false, 00:10:52.835 "zone_append": false, 00:10:52.835 "compare": false, 00:10:52.835 "compare_and_write": false, 00:10:52.835 "abort": true, 00:10:52.835 "seek_hole": false, 00:10:52.835 "seek_data": false, 00:10:52.835 "copy": true, 00:10:52.835 "nvme_iov_md": false 00:10:52.835 }, 00:10:52.835 "memory_domains": [ 00:10:52.835 { 00:10:52.835 "dma_device_id": "system", 00:10:52.835 "dma_device_type": 1 00:10:52.835 }, 00:10:52.835 { 00:10:52.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.835 "dma_device_type": 2 00:10:52.835 } 00:10:52.835 ], 00:10:52.835 "driver_specific": {} 00:10:52.835 } 00:10:52.835 ] 00:10:52.835 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.835 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:10:52.835 13:10:33 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:10:52.835 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.835 13:10:33 
blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:52.835 13:10:33 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.835 13:10:33 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # sleep 1 00:10:52.835 13:10:33 blockdev_general.bdev_error -- bdev/blockdev.sh@482 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:10:53.103 Running I/O for 5 seconds... 00:10:54.040 13:10:34 blockdev_general.bdev_error -- bdev/blockdev.sh@486 -- # kill -0 646896 00:10:54.040 13:10:34 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # echo 'Process is existed as continue on error is set. Pid: 646896' 00:10:54.040 Process is existed as continue on error is set. Pid: 646896 00:10:54.040 13:10:34 blockdev_general.bdev_error -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:10:54.040 13:10:34 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.040 13:10:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:54.040 13:10:34 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.040 13:10:34 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_malloc_delete Dev_1 00:10:54.040 13:10:34 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.040 13:10:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:54.040 13:10:34 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.040 13:10:34 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # sleep 5 00:10:54.040 Timeout while waiting for response: 00:10:54.040 00:10:54.040 00:10:58.274 00:10:58.274 Latency(us) 00:10:58.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:58.274 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 
00:10:58.274 EE_Dev_1 : 0.79 40500.59 158.21 6.35 0.00 391.67 119.60 642.25 00:10:58.274 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:10:58.274 Dev_2 : 5.00 88946.89 347.45 0.00 0.00 176.63 61.03 18769.51 00:10:58.274 =================================================================================================================== 00:10:58.274 Total : 129447.48 505.65 6.35 0.00 191.01 61.03 18769.51 00:10:58.842 13:10:39 blockdev_general.bdev_error -- bdev/blockdev.sh@498 -- # killprocess 646896 00:10:58.842 13:10:39 blockdev_general.bdev_error -- common/autotest_common.sh@950 -- # '[' -z 646896 ']' 00:10:58.842 13:10:39 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # kill -0 646896 00:10:58.842 13:10:39 blockdev_general.bdev_error -- common/autotest_common.sh@955 -- # uname 00:10:59.101 13:10:39 blockdev_general.bdev_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:59.101 13:10:39 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 646896 00:10:59.101 13:10:39 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:59.101 13:10:39 blockdev_general.bdev_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:59.101 13:10:39 blockdev_general.bdev_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 646896' 00:10:59.101 killing process with pid 646896 00:10:59.101 13:10:39 blockdev_general.bdev_error -- common/autotest_common.sh@969 -- # kill 646896 00:10:59.101 Received shutdown signal, test time was about 5.000000 seconds 00:10:59.101 00:10:59.101 Latency(us) 00:10:59.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:59.101 =================================================================================================================== 00:10:59.101 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:59.101 13:10:39 blockdev_general.bdev_error -- 
common/autotest_common.sh@974 -- # wait 646896 00:10:59.360 13:10:39 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # ERR_PID=648142 00:10:59.360 13:10:39 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # echo 'Process error testing pid: 648142' 00:10:59.360 Process error testing pid: 648142 00:10:59.360 13:10:39 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # waitforlisten 648142 00:10:59.360 13:10:39 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # '[' -z 648142 ']' 00:10:59.360 13:10:39 blockdev_general.bdev_error -- bdev/blockdev.sh@501 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:10:59.360 13:10:39 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.360 13:10:39 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:59.360 13:10:39 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.360 13:10:39 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:59.360 13:10:39 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:59.360 [2024-07-26 13:10:39.691111] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:10:59.360 [2024-07-26 13:10:39.691181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid648142 ] 00:10:59.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.360 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:59.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.360 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:59.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.360 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:59.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.360 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:59.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.360 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:59.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.360 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:59.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.360 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:59.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.360 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:59.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.360 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:59.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.360 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:59.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.360 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:59.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.360 EAL: Requested device 0000:3d:02.3 cannot be used 
00:10:59.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.360 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:59.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.360 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:59.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.360 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:59.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.360 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:59.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.360 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:59.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.360 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:59.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.360 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:59.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.360 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:59.360 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.361 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:59.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.361 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:59.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.361 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:59.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.361 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:59.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.361 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:59.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.361 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:59.361 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.361 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:59.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.361 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:59.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.361 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:59.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.361 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:59.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.361 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:59.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:59.361 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:59.361 [2024-07-26 13:10:39.810779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.619 [2024-07-26 13:10:39.899117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@864 -- # return 0 00:10:59.879 13:10:40 blockdev_general.bdev_error -- bdev/blockdev.sh@506 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:59.879 Dev_1 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.879 13:10:40 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # waitforbdev Dev_1 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_1 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:59.879 [ 00:10:59.879 { 00:10:59.879 "name": "Dev_1", 00:10:59.879 "aliases": [ 00:10:59.879 "1dd5d27d-9cc2-42b0-828e-a6e73251b166" 00:10:59.879 ], 00:10:59.879 "product_name": "Malloc disk", 00:10:59.879 "block_size": 512, 00:10:59.879 "num_blocks": 262144, 00:10:59.879 "uuid": "1dd5d27d-9cc2-42b0-828e-a6e73251b166", 00:10:59.879 "assigned_rate_limits": { 00:10:59.879 "rw_ios_per_sec": 0, 00:10:59.879 "rw_mbytes_per_sec": 0, 00:10:59.879 "r_mbytes_per_sec": 0, 00:10:59.879 "w_mbytes_per_sec": 0 00:10:59.879 }, 00:10:59.879 "claimed": false, 00:10:59.879 "zoned": false, 00:10:59.879 "supported_io_types": { 00:10:59.879 "read": true, 00:10:59.879 "write": true, 00:10:59.879 "unmap": true, 00:10:59.879 "flush": true, 00:10:59.879 "reset": true, 00:10:59.879 "nvme_admin": false, 00:10:59.879 "nvme_io": false, 00:10:59.879 "nvme_io_md": false, 00:10:59.879 "write_zeroes": true, 00:10:59.879 "zcopy": true, 00:10:59.879 "get_zone_info": 
false, 00:10:59.879 "zone_management": false, 00:10:59.879 "zone_append": false, 00:10:59.879 "compare": false, 00:10:59.879 "compare_and_write": false, 00:10:59.879 "abort": true, 00:10:59.879 "seek_hole": false, 00:10:59.879 "seek_data": false, 00:10:59.879 "copy": true, 00:10:59.879 "nvme_iov_md": false 00:10:59.879 }, 00:10:59.879 "memory_domains": [ 00:10:59.879 { 00:10:59.879 "dma_device_id": "system", 00:10:59.879 "dma_device_type": 1 00:10:59.879 }, 00:10:59.879 { 00:10:59.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.879 "dma_device_type": 2 00:10:59.879 } 00:10:59.879 ], 00:10:59.879 "driver_specific": {} 00:10:59.879 } 00:10:59.879 ] 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:10:59.879 13:10:40 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_error_create Dev_1 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:59.879 true 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.879 13:10:40 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:59.879 Dev_2 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.879 13:10:40 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # waitforbdev Dev_2 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_2 00:10:59.879 13:10:40 blockdev_general.bdev_error -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.879 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:59.879 [ 00:10:59.879 { 00:10:59.879 "name": "Dev_2", 00:10:59.879 "aliases": [ 00:10:59.879 "d84f6152-dcc8-43c4-92ae-839cc09b316a" 00:10:59.879 ], 00:10:59.879 "product_name": "Malloc disk", 00:10:59.879 "block_size": 512, 00:10:59.879 "num_blocks": 262144, 00:10:59.879 "uuid": "d84f6152-dcc8-43c4-92ae-839cc09b316a", 00:10:59.879 "assigned_rate_limits": { 00:10:59.879 "rw_ios_per_sec": 0, 00:10:59.879 "rw_mbytes_per_sec": 0, 00:10:59.879 "r_mbytes_per_sec": 0, 00:10:59.879 "w_mbytes_per_sec": 0 00:10:59.879 }, 00:10:59.879 "claimed": false, 00:10:59.879 "zoned": false, 00:10:59.879 "supported_io_types": { 00:10:59.879 "read": true, 00:10:59.879 "write": true, 00:10:59.879 "unmap": true, 00:10:59.879 "flush": true, 00:10:59.879 "reset": true, 00:10:59.879 "nvme_admin": false, 00:10:59.879 "nvme_io": false, 00:10:59.879 "nvme_io_md": false, 00:10:59.879 "write_zeroes": true, 
00:10:59.879 "zcopy": true, 00:10:59.880 "get_zone_info": false, 00:10:59.880 "zone_management": false, 00:10:59.880 "zone_append": false, 00:10:59.880 "compare": false, 00:10:59.880 "compare_and_write": false, 00:10:59.880 "abort": true, 00:10:59.880 "seek_hole": false, 00:10:59.880 "seek_data": false, 00:10:59.880 "copy": true, 00:10:59.880 "nvme_iov_md": false 00:10:59.880 }, 00:10:59.880 "memory_domains": [ 00:10:59.880 { 00:10:59.880 "dma_device_id": "system", 00:10:59.880 "dma_device_type": 1 00:10:59.880 }, 00:10:59.880 { 00:10:59.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.880 "dma_device_type": 2 00:10:59.880 } 00:10:59.880 ], 00:10:59.880 "driver_specific": {} 00:10:59.880 } 00:10:59.880 ] 00:10:59.880 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.880 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:10:59.880 13:10:40 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:10:59.880 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.880 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:10:59.880 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.880 13:10:40 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # NOT wait 648142 00:10:59.880 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # local es=0 00:10:59.880 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@652 -- # valid_exec_arg wait 648142 00:10:59.880 13:10:40 blockdev_general.bdev_error -- bdev/blockdev.sh@513 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:10:59.880 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@638 -- # local arg=wait 00:10:59.880 13:10:40 blockdev_general.bdev_error -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:59.880 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # type -t wait 00:10:59.880 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:59.880 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@653 -- # wait 648142 00:11:00.139 Running I/O for 5 seconds... 00:11:00.139 task offset: 234200 on job bdev=EE_Dev_1 fails 00:11:00.139 00:11:00.139 Latency(us) 00:11:00.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:00.139 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:11:00.139 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:11:00.139 EE_Dev_1 : 0.00 31791.91 124.19 7225.43 0.00 344.53 118.78 609.48 00:11:00.139 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:11:00.139 Dev_2 : 0.00 19740.90 77.11 0.00 0.00 608.61 115.51 1133.77 00:11:00.139 =================================================================================================================== 00:11:00.139 Total : 51532.81 201.30 7225.43 0.00 487.76 115.51 1133.77 00:11:00.139 [2024-07-26 13:10:40.417793] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:00.139 request: 00:11:00.139 { 00:11:00.139 "method": "perform_tests", 00:11:00.139 "req_id": 1 00:11:00.139 } 00:11:00.139 Got JSON-RPC error response 00:11:00.139 response: 00:11:00.139 { 00:11:00.139 "code": -32603, 00:11:00.139 "message": "bdevperf failed with error Operation not permitted" 00:11:00.139 } 00:11:00.398 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@653 -- # es=255 00:11:00.398 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:00.398 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@662 -- # es=127 00:11:00.398 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@663 
-- # case "$es" in 00:11:00.398 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@670 -- # es=1 00:11:00.398 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:00.398 00:11:00.398 real 0m8.413s 00:11:00.398 user 0m8.945s 00:11:00.398 sys 0m0.807s 00:11:00.398 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:00.398 13:10:40 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:11:00.398 ************************************ 00:11:00.398 END TEST bdev_error 00:11:00.398 ************************************ 00:11:00.398 13:10:40 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_stat stat_test_suite '' 00:11:00.398 13:10:40 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:00.398 13:10:40 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:00.398 13:10:40 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:00.398 ************************************ 00:11:00.398 START TEST bdev_stat 00:11:00.398 ************************************ 00:11:00.398 13:10:40 blockdev_general.bdev_stat -- common/autotest_common.sh@1125 -- # stat_test_suite '' 00:11:00.398 13:10:40 blockdev_general.bdev_stat -- bdev/blockdev.sh@591 -- # STAT_DEV=Malloc_STAT 00:11:00.398 13:10:40 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # STAT_PID=648256 00:11:00.398 13:10:40 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # echo 'Process Bdev IO statistics testing pid: 648256' 00:11:00.398 Process Bdev IO statistics testing pid: 648256 00:11:00.398 13:10:40 blockdev_general.bdev_stat -- bdev/blockdev.sh@594 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:11:00.398 13:10:40 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:11:00.398 
13:10:40 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # waitforlisten 648256 00:11:00.398 13:10:40 blockdev_general.bdev_stat -- common/autotest_common.sh@831 -- # '[' -z 648256 ']' 00:11:00.398 13:10:40 blockdev_general.bdev_stat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.398 13:10:40 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:00.398 13:10:40 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.398 13:10:40 blockdev_general.bdev_stat -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:00.398 13:10:40 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:00.398 [2024-07-26 13:10:40.809008] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:11:00.399 [2024-07-26 13:10:40.809070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid648256 ] 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3d:02.3 cannot be used 
00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:00.399 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:00.399 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:00.399 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:00.657 [2024-07-26 13:10:40.942455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:00.657 [2024-07-26 13:10:41.030378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.657 [2024-07-26 13:10:41.030385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.224 13:10:41 blockdev_general.bdev_stat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:01.224 13:10:41 blockdev_general.bdev_stat -- common/autotest_common.sh@864 -- # return 0 00:11:01.224 13:10:41 blockdev_general.bdev_stat -- bdev/blockdev.sh@600 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:11:01.224 13:10:41 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.224 13:10:41 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:01.224 Malloc_STAT 00:11:01.224 13:10:41 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.224 13:10:41 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # waitforbdev Malloc_STAT 00:11:01.224 13:10:41 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local 
bdev_name=Malloc_STAT 00:11:01.224 13:10:41 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:01.224 13:10:41 blockdev_general.bdev_stat -- common/autotest_common.sh@901 -- # local i 00:11:01.224 13:10:41 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:01.224 13:10:41 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:01.224 13:10:41 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:01.224 13:10:41 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.224 13:10:41 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:01.224 13:10:41 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.224 13:10:41 blockdev_general.bdev_stat -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:11:01.224 13:10:41 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.224 13:10:41 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:01.224 [ 00:11:01.224 { 00:11:01.224 "name": "Malloc_STAT", 00:11:01.224 "aliases": [ 00:11:01.224 "f4eaa72e-f1b4-4c6c-9917-ad8ff72c3e6d" 00:11:01.224 ], 00:11:01.224 "product_name": "Malloc disk", 00:11:01.224 "block_size": 512, 00:11:01.224 "num_blocks": 262144, 00:11:01.224 "uuid": "f4eaa72e-f1b4-4c6c-9917-ad8ff72c3e6d", 00:11:01.224 "assigned_rate_limits": { 00:11:01.224 "rw_ios_per_sec": 0, 00:11:01.224 "rw_mbytes_per_sec": 0, 00:11:01.224 "r_mbytes_per_sec": 0, 00:11:01.224 "w_mbytes_per_sec": 0 00:11:01.224 }, 00:11:01.224 "claimed": false, 00:11:01.224 "zoned": false, 00:11:01.224 "supported_io_types": { 00:11:01.224 "read": true, 00:11:01.224 "write": true, 00:11:01.224 "unmap": true, 00:11:01.224 "flush": true, 00:11:01.224 "reset": true, 00:11:01.224 "nvme_admin": false, 00:11:01.224 "nvme_io": false, 
00:11:01.224 "nvme_io_md": false, 00:11:01.224 "write_zeroes": true, 00:11:01.224 "zcopy": true, 00:11:01.224 "get_zone_info": false, 00:11:01.224 "zone_management": false, 00:11:01.224 "zone_append": false, 00:11:01.224 "compare": false, 00:11:01.224 "compare_and_write": false, 00:11:01.224 "abort": true, 00:11:01.224 "seek_hole": false, 00:11:01.224 "seek_data": false, 00:11:01.224 "copy": true, 00:11:01.224 "nvme_iov_md": false 00:11:01.224 }, 00:11:01.224 "memory_domains": [ 00:11:01.224 { 00:11:01.224 "dma_device_id": "system", 00:11:01.224 "dma_device_type": 1 00:11:01.224 }, 00:11:01.224 { 00:11:01.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.224 "dma_device_type": 2 00:11:01.224 } 00:11:01.224 ], 00:11:01.224 "driver_specific": {} 00:11:01.224 } 00:11:01.224 ] 00:11:01.224 13:10:41 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.224 13:10:41 blockdev_general.bdev_stat -- common/autotest_common.sh@907 -- # return 0 00:11:01.224 13:10:41 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # sleep 2 00:11:01.224 13:10:41 blockdev_general.bdev_stat -- bdev/blockdev.sh@603 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:01.483 Running I/O for 10 seconds... 
00:11:03.389 13:10:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # stat_function_test Malloc_STAT 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@558 -- # local bdev_name=Malloc_STAT 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local iostats 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local io_count1 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count2 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local iostats_per_channel 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local io_count_per_channel1 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel2 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel_all=0 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # iostats='{ 00:11:03.389 "tick_rate": 2500000000, 00:11:03.389 "ticks": 14479413865731268, 00:11:03.389 "bdevs": [ 00:11:03.389 { 00:11:03.389 "name": "Malloc_STAT", 00:11:03.389 "bytes_read": 808497664, 00:11:03.389 "num_read_ops": 197380, 00:11:03.389 "bytes_written": 0, 00:11:03.389 "num_write_ops": 0, 00:11:03.389 "bytes_unmapped": 0, 00:11:03.389 "num_unmap_ops": 0, 00:11:03.389 "bytes_copied": 0, 00:11:03.389 "num_copy_ops": 0, 00:11:03.389 "read_latency_ticks": 2431168836464, 00:11:03.389 "max_read_latency_ticks": 14860188, 00:11:03.389 "min_read_latency_ticks": 281016, 
00:11:03.389 "write_latency_ticks": 0, 00:11:03.389 "max_write_latency_ticks": 0, 00:11:03.389 "min_write_latency_ticks": 0, 00:11:03.389 "unmap_latency_ticks": 0, 00:11:03.389 "max_unmap_latency_ticks": 0, 00:11:03.389 "min_unmap_latency_ticks": 0, 00:11:03.389 "copy_latency_ticks": 0, 00:11:03.389 "max_copy_latency_ticks": 0, 00:11:03.389 "min_copy_latency_ticks": 0, 00:11:03.389 "io_error": {} 00:11:03.389 } 00:11:03.389 ] 00:11:03.389 }' 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # jq -r '.bdevs[0].num_read_ops' 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # io_count1=197380 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # iostats_per_channel='{ 00:11:03.389 "tick_rate": 2500000000, 00:11:03.389 "ticks": 14479414027857896, 00:11:03.389 "name": "Malloc_STAT", 00:11:03.389 "channels": [ 00:11:03.389 { 00:11:03.389 "thread_id": 2, 00:11:03.389 "bytes_read": 416284672, 00:11:03.389 "num_read_ops": 101632, 00:11:03.389 "bytes_written": 0, 00:11:03.389 "num_write_ops": 0, 00:11:03.389 "bytes_unmapped": 0, 00:11:03.389 "num_unmap_ops": 0, 00:11:03.389 "bytes_copied": 0, 00:11:03.389 "num_copy_ops": 0, 00:11:03.389 "read_latency_ticks": 1255867850498, 00:11:03.389 "max_read_latency_ticks": 13172780, 00:11:03.389 "min_read_latency_ticks": 8143972, 00:11:03.389 "write_latency_ticks": 0, 00:11:03.389 "max_write_latency_ticks": 0, 00:11:03.389 "min_write_latency_ticks": 0, 00:11:03.389 "unmap_latency_ticks": 0, 00:11:03.389 "max_unmap_latency_ticks": 0, 00:11:03.389 
"min_unmap_latency_ticks": 0, 00:11:03.389 "copy_latency_ticks": 0, 00:11:03.389 "max_copy_latency_ticks": 0, 00:11:03.389 "min_copy_latency_ticks": 0 00:11:03.389 }, 00:11:03.389 { 00:11:03.389 "thread_id": 3, 00:11:03.389 "bytes_read": 419430400, 00:11:03.389 "num_read_ops": 102400, 00:11:03.389 "bytes_written": 0, 00:11:03.389 "num_write_ops": 0, 00:11:03.389 "bytes_unmapped": 0, 00:11:03.389 "num_unmap_ops": 0, 00:11:03.389 "bytes_copied": 0, 00:11:03.389 "num_copy_ops": 0, 00:11:03.389 "read_latency_ticks": 1257354560068, 00:11:03.389 "max_read_latency_ticks": 14860188, 00:11:03.389 "min_read_latency_ticks": 8310546, 00:11:03.389 "write_latency_ticks": 0, 00:11:03.389 "max_write_latency_ticks": 0, 00:11:03.389 "min_write_latency_ticks": 0, 00:11:03.389 "unmap_latency_ticks": 0, 00:11:03.389 "max_unmap_latency_ticks": 0, 00:11:03.389 "min_unmap_latency_ticks": 0, 00:11:03.389 "copy_latency_ticks": 0, 00:11:03.389 "max_copy_latency_ticks": 0, 00:11:03.389 "min_copy_latency_ticks": 0 00:11:03.389 } 00:11:03.389 ] 00:11:03.389 }' 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # jq -r '.channels[0].num_read_ops' 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # io_count_per_channel1=101632 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel_all=101632 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # jq -r '.channels[1].num_read_ops' 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel2=102400 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel_all=204032 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.389 13:10:43 blockdev_general.bdev_stat -- 
common/autotest_common.sh@10 -- # set +x 00:11:03.648 13:10:43 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.648 13:10:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # iostats='{ 00:11:03.648 "tick_rate": 2500000000, 00:11:03.648 "ticks": 14479414325249648, 00:11:03.648 "bdevs": [ 00:11:03.648 { 00:11:03.648 "name": "Malloc_STAT", 00:11:03.648 "bytes_read": 886092288, 00:11:03.648 "num_read_ops": 216324, 00:11:03.648 "bytes_written": 0, 00:11:03.648 "num_write_ops": 0, 00:11:03.648 "bytes_unmapped": 0, 00:11:03.648 "num_unmap_ops": 0, 00:11:03.648 "bytes_copied": 0, 00:11:03.648 "num_copy_ops": 0, 00:11:03.648 "read_latency_ticks": 2664748271908, 00:11:03.648 "max_read_latency_ticks": 14860188, 00:11:03.648 "min_read_latency_ticks": 281016, 00:11:03.648 "write_latency_ticks": 0, 00:11:03.648 "max_write_latency_ticks": 0, 00:11:03.648 "min_write_latency_ticks": 0, 00:11:03.648 "unmap_latency_ticks": 0, 00:11:03.648 "max_unmap_latency_ticks": 0, 00:11:03.648 "min_unmap_latency_ticks": 0, 00:11:03.648 "copy_latency_ticks": 0, 00:11:03.648 "max_copy_latency_ticks": 0, 00:11:03.648 "min_copy_latency_ticks": 0, 00:11:03.648 "io_error": {} 00:11:03.648 } 00:11:03.648 ] 00:11:03.648 }' 00:11:03.648 13:10:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # jq -r '.bdevs[0].num_read_ops' 00:11:03.648 13:10:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # io_count2=216324 00:11:03.648 13:10:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 204032 -lt 197380 ']' 00:11:03.648 13:10:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 204032 -gt 216324 ']' 00:11:03.648 13:10:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@607 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:11:03.648 13:10:43 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.648 13:10:43 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:11:03.648 00:11:03.649 
Latency(us)
00:11:03.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:03.649 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096)
00:11:03.649 Malloc_STAT : 2.16 51688.17 201.91 0.00 0.00 4941.19 1336.93 5426.38
00:11:03.649 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:11:03.649 Malloc_STAT : 2.16 52100.27 203.52 0.00 0.00 4902.62 930.61 5950.67
00:11:03.649 ===================================================================================================================
00:11:03.649 Total : 103788.44 405.42 0.00 0.00 4921.82 930.61 5950.67
00:11:03.649 0
00:11:03.649 13:10:44 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:03.649 13:10:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # killprocess 648256
00:11:03.649 13:10:44 blockdev_general.bdev_stat -- common/autotest_common.sh@950 -- # '[' -z 648256 ']'
00:11:03.649 13:10:44 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # kill -0 648256
00:11:03.649 13:10:44 blockdev_general.bdev_stat -- common/autotest_common.sh@955 -- # uname
00:11:03.649 13:10:44 blockdev_general.bdev_stat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:03.649 13:10:44 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 648256
00:11:03.649 13:10:44 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:11:03.649 13:10:44 blockdev_general.bdev_stat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:11:03.649 13:10:44 blockdev_general.bdev_stat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 648256'
00:11:03.649 killing process with pid 648256
00:11:03.649 13:10:44 blockdev_general.bdev_stat -- common/autotest_common.sh@969 -- # kill 648256
00:11:03.649 Received shutdown signal, test time was about 2.246475 seconds
00:11:03.649
00:11:03.649 Latency(us)
00:11:03.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:03.649 ===================================================================================================================
00:11:03.649 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:11:03.649 13:10:44 blockdev_general.bdev_stat -- common/autotest_common.sh@974 -- # wait 648256
00:11:03.908 13:10:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # trap - SIGINT SIGTERM EXIT
00:11:03.908
00:11:03.908 real 0m3.509s
00:11:03.908 user 0m6.963s
00:11:03.908 sys 0m0.431s
00:11:03.908 13:10:44 blockdev_general.bdev_stat -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:03.908 13:10:44 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:11:03.908 ************************************
00:11:03.908 END TEST bdev_stat
00:11:03.908 ************************************
00:11:03.908 13:10:44 blockdev_general -- bdev/blockdev.sh@793 -- # [[ bdev == gpt ]]
00:11:03.908 13:10:44 blockdev_general -- bdev/blockdev.sh@797 -- # [[ bdev == crypto_sw ]]
00:11:03.908 13:10:44 blockdev_general -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:11:03.908 13:10:44 blockdev_general -- bdev/blockdev.sh@810 -- # cleanup
00:11:03.908 13:10:44 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile
00:11:03.908 13:10:44 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json
00:11:03.908 13:10:44 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]]
00:11:03.908 13:10:44 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]]
00:11:03.908 13:10:44 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]]
00:11:03.908 13:10:44 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]]
00:11:03.908
00:11:03.908 real 1m53.000s
00:11:03.908 user 7m22.789s
00:11:03.908 sys 0m21.389s
00:11:03.908 13:10:44
blockdev_general -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:03.908 13:10:44 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:03.908 ************************************ 00:11:03.908 END TEST blockdev_general 00:11:03.908 ************************************ 00:11:03.908 13:10:44 -- spdk/autotest.sh@194 -- # run_test bdev_raid /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:11:03.908 13:10:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:03.908 13:10:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:03.908 13:10:44 -- common/autotest_common.sh@10 -- # set +x 00:11:03.908 ************************************ 00:11:03.908 START TEST bdev_raid 00:11:03.908 ************************************ 00:11:03.908 13:10:44 bdev_raid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:11:04.168 * Looking for test storage... 00:11:04.168 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:11:04.168 13:10:44 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:11:04.168 13:10:44 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:11:04.168 13:10:44 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:11:04.168 13:10:44 bdev_raid -- bdev/bdev_raid.sh@927 -- # mkdir -p /raidtest 00:11:04.168 13:10:44 bdev_raid -- bdev/bdev_raid.sh@928 -- # trap 'cleanup; exit 1' EXIT 00:11:04.168 13:10:44 bdev_raid -- bdev/bdev_raid.sh@930 -- # base_blocklen=512 00:11:04.168 13:10:44 bdev_raid -- bdev/bdev_raid.sh@932 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:11:04.168 13:10:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:04.168 13:10:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:11:04.168 13:10:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:04.168 ************************************ 00:11:04.168 START TEST raid0_resize_superblock_test 00:11:04.168 ************************************ 00:11:04.168 13:10:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:11:04.168 13:10:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@868 -- # local raid_level=0 00:11:04.168 13:10:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # raid_pid=649123 00:11:04.168 13:10:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@872 -- # echo 'Process raid pid: 649123' 00:11:04.168 Process raid pid: 649123 00:11:04.168 13:10:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:04.168 13:10:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@873 -- # waitforlisten 649123 /var/tmp/spdk-raid.sock 00:11:04.168 13:10:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 649123 ']' 00:11:04.168 13:10:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:04.168 13:10:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:04.168 13:10:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:04.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:11:04.168 13:10:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:04.168 13:10:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.168 [2024-07-26 13:10:44.608681] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:11:04.168 [2024-07-26 13:10:44.608740] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.168 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.168 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:04.168 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.168 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:04.168 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.168 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:04.168 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.168 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:04.168 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.168 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:04.168 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.169 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:04.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.169 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:04.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.169 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:04.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.169 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:04.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.169 EAL: Requested device 
0000:3d:02.1 cannot be used 00:11:04.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.169 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:04.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.169 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:04.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.169 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:04.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.169 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:04.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.169 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:04.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.169 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:04.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.169 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:04.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.169 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:04.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.169 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:04.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.169 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:04.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.169 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:04.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.169 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:04.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.169 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:04.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.169 EAL: Requested device 0000:3f:01.7 cannot be 
used 00:11:04.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.169 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:04.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.169 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:04.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.169 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:04.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.169 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:04.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.169 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:04.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.169 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:04.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.169 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:04.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:04.169 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:04.428 [2024-07-26 13:10:44.742638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.428 [2024-07-26 13:10:44.829394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.428 [2024-07-26 13:10:44.886793] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.428 [2024-07-26 13:10:44.886825] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.995 13:10:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:04.995 13:10:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:04.995 13:10:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create -b 
malloc0 512 512 00:11:05.564 malloc0 00:11:05.564 13:10:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@877 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:11:05.564 [2024-07-26 13:10:46.064636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:11:05.564 [2024-07-26 13:10:46.064682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.564 [2024-07-26 13:10:46.064704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1633c60 00:11:05.564 [2024-07-26 13:10:46.064715] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.564 [2024-07-26 13:10:46.066260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.564 [2024-07-26 13:10:46.066289] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:11:05.564 pt0 00:11:05.564 13:10:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@878 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create_lvstore pt0 lvs0 00:11:06.133 36c2d6ec-a6d0-49eb-8db6-87aa61e6c7b2 00:11:06.133 13:10:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol0 64 00:11:06.133 cef325de-bc28-4eda-94ba-1b3e035cf46d 00:11:06.133 13:10:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol1 64 00:11:06.392 ffdd9ecf-31cc-4ced-a7aa-890080611d02 00:11:06.392 13:10:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@883 -- # case $raid_level in 00:11:06.392 13:10:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@884 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -n Raid -r 0 -z 64 -b 'lvs0/lvol0 lvs0/lvol1' -s 00:11:06.650 [2024-07-26 13:10:47.050330] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev cef325de-bc28-4eda-94ba-1b3e035cf46d is claimed 00:11:06.650 [2024-07-26 13:10:47.050412] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev ffdd9ecf-31cc-4ced-a7aa-890080611d02 is claimed 00:11:06.650 [2024-07-26 13:10:47.050528] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x17df9f0 00:11:06.650 [2024-07-26 13:10:47.050539] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:11:06.650 [2024-07-26 13:10:47.050722] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17e2810 00:11:06.650 [2024-07-26 13:10:47.050866] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x17df9f0 00:11:06.650 [2024-07-26 13:10:47.050876] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x17df9f0 00:11:06.650 [2024-07-26 13:10:47.050986] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.650 13:10:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:11:06.650 13:10:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:11:06.908 13:10:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 64 == 64 )) 00:11:06.908 13:10:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:11:06.908 13:10:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:11:07.166 13:10:47 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 64 == 64 )) 00:11:07.166 13:10:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:07.166 13:10:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:11:07.166 13:10:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:07.166 13:10:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:11:07.424 [2024-07-26 13:10:47.724279] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.424 13:10:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:07.424 13:10:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:07.424 13:10:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 245760 == 245760 )) 00:11:07.424 13:10:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol0 100 00:11:07.424 [2024-07-26 13:10:47.948813] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:07.424 [2024-07-26 13:10:47.948835] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'cef325de-bc28-4eda-94ba-1b3e035cf46d' was resized: old size 131072, new size 204800 00:11:07.682 13:10:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol1 100 00:11:07.682 [2024-07-26 13:10:48.173349] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:07.682 [2024-07-26 13:10:48.173366] 
bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'ffdd9ecf-31cc-4ced-a7aa-890080611d02' was resized: old size 131072, new size 204800 00:11:07.682 [2024-07-26 13:10:48.173386] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:11:07.682 13:10:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:11:07.682 13:10:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # jq '.[].num_blocks' 00:11:07.941 13:10:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # (( 100 == 100 )) 00:11:07.941 13:10:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:11:07.941 13:10:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # jq '.[].num_blocks' 00:11:08.199 13:10:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # (( 100 == 100 )) 00:11:08.199 13:10:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:11:08.199 13:10:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:11:08.199 13:10:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:11:08.199 13:10:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # jq '.[].num_blocks' 00:11:08.765 [2024-07-26 13:10:49.120098] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:08.765 13:10:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:11:08.765 13:10:48 bdev_raid.raid0_resize_superblock_test 
-- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:11:08.765 13:10:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # (( 393216 == 393216 )) 00:11:08.765 13:10:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@912 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt0 00:11:09.023 [2024-07-26 13:10:49.360527] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:11:09.023 [2024-07-26 13:10:49.360579] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:11:09.023 [2024-07-26 13:10:49.360588] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:09.023 [2024-07-26 13:10:49.360599] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:11:09.023 [2024-07-26 13:10:49.360672] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:09.023 [2024-07-26 13:10:49.360703] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:09.023 [2024-07-26 13:10:49.360714] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17df9f0 name Raid, state offline 00:11:09.023 13:10:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@913 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:11:09.589 [2024-07-26 13:10:49.853781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:11:09.589 [2024-07-26 13:10:49.853821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:09.589 [2024-07-26 13:10:49.853841] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17e5ee0 00:11:09.589 [2024-07-26 13:10:49.853853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:09.589 [2024-07-26 
13:10:49.855346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:09.589 [2024-07-26 13:10:49.855373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:11:09.589 [2024-07-26 13:10:49.856485] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev cef325de-bc28-4eda-94ba-1b3e035cf46d 00:11:09.589 [2024-07-26 13:10:49.856519] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev cef325de-bc28-4eda-94ba-1b3e035cf46d is claimed 00:11:09.589 [2024-07-26 13:10:49.856599] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev ffdd9ecf-31cc-4ced-a7aa-890080611d02 00:11:09.589 [2024-07-26 13:10:49.856616] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev ffdd9ecf-31cc-4ced-a7aa-890080611d02 is claimed 00:11:09.589 [2024-07-26 13:10:49.856717] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev ffdd9ecf-31cc-4ced-a7aa-890080611d02 (2) smaller than existing raid bdev Raid (3) 00:11:09.589 [2024-07-26 13:10:49.856745] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x17de5a0 00:11:09.589 [2024-07-26 13:10:49.856752] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:11:09.589 [2024-07-26 13:10:49.856906] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17e0de0 00:11:09.589 [2024-07-26 13:10:49.857037] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x17de5a0 00:11:09.589 [2024-07-26 13:10:49.857046] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x17de5a0 00:11:09.589 [2024-07-26 13:10:49.857156] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.589 pt0 00:11:09.589 13:10:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:11:09.589 13:10:49 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@918 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:11:09.589 13:10:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:11:09.589 13:10:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@918 -- # jq '.[].num_blocks' 00:11:09.589 [2024-07-26 13:10:50.086635] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.589 13:10:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:11:09.589 13:10:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:11:09.589 13:10:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@918 -- # (( 393216 == 393216 )) 00:11:09.589 13:10:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@922 -- # killprocess 649123 00:11:09.589 13:10:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 649123 ']' 00:11:09.589 13:10:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 649123 00:11:09.589 13:10:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:09.589 13:10:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:09.589 13:10:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 649123 00:11:09.847 13:10:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:09.847 13:10:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:09.847 13:10:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 649123' 00:11:09.847 killing process with pid 649123 00:11:09.847 13:10:50 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 649123 00:11:09.847 [2024-07-26 13:10:50.163405] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:09.847 [2024-07-26 13:10:50.163455] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:09.847 [2024-07-26 13:10:50.163490] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:09.847 [2024-07-26 13:10:50.163500] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17de5a0 name Raid, state offline 00:11:09.847 13:10:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 649123 00:11:09.847 [2024-07-26 13:10:50.242568] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:10.105 13:10:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@924 -- # return 0 00:11:10.105 00:11:10.105 real 0m5.880s 00:11:10.105 user 0m9.718s 00:11:10.105 sys 0m1.193s 00:11:10.105 13:10:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:10.105 13:10:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.105 ************************************ 00:11:10.105 END TEST raid0_resize_superblock_test 00:11:10.105 ************************************ 00:11:10.105 13:10:50 bdev_raid -- bdev/bdev_raid.sh@933 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:11:10.105 13:10:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:10.105 13:10:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:10.105 13:10:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:10.105 ************************************ 00:11:10.105 START TEST raid1_resize_superblock_test 00:11:10.105 ************************************ 00:11:10.105 13:10:50 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:11:10.105 13:10:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@868 -- # local raid_level=1 00:11:10.105 13:10:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:10.105 13:10:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # raid_pid=650131 00:11:10.105 13:10:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@872 -- # echo 'Process raid pid: 650131' 00:11:10.105 Process raid pid: 650131 00:11:10.105 13:10:50 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@873 -- # waitforlisten 650131 /var/tmp/spdk-raid.sock 00:11:10.105 13:10:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 650131 ']' 00:11:10.105 13:10:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:10.105 13:10:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:10.105 13:10:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:10.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:10.105 13:10:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:10.105 13:10:50 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.105 [2024-07-26 13:10:50.561429] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:11:10.105 [2024-07-26 13:10:50.561486] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:10.364 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:10.364 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:10.364 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:10.364 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:10.364 [2024-07-26 13:10:50.694908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.364 [2024-07-26 13:10:50.780766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.364 [2024-07-26 13:10:50.835164] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.364 [2024-07-26 13:10:50.835189] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.333 13:10:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:11.333 13:10:51 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:11.333 13:10:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create -b malloc0 512 512 00:11:11.333 malloc0 00:11:11.333 13:10:51 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@877 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:11:11.591 [2024-07-26 13:10:52.032665] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:11:11.591 [2024-07-26 13:10:52.032707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.591 [2024-07-26 13:10:52.032728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13e7c60 00:11:11.591 [2024-07-26 13:10:52.032740] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.591 [2024-07-26 13:10:52.034311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.591 [2024-07-26 13:10:52.034339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:11:11.591 pt0 00:11:11.591 13:10:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@878 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create_lvstore pt0 lvs0 00:11:11.850 7ee616db-dfa9-4b09-833a-4dd720fa623f 00:11:11.850 13:10:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol0 64 00:11:12.108 878c6546-204a-4c8b-90b7-b911f36820d7 00:11:12.108 13:10:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol1 64 00:11:12.367 d1481aee-e619-4d40-aef9-28da4d7ba422 00:11:12.367 13:10:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@883 -- # case $raid_level in 00:11:12.367 13:10:52 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -n Raid -r 1 -b 'lvs0/lvol0 lvs0/lvol1' -s 00:11:12.625 [2024-07-26 13:10:53.021239] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 878c6546-204a-4c8b-90b7-b911f36820d7 is claimed 
00:11:12.625 [2024-07-26 13:10:53.021318] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev d1481aee-e619-4d40-aef9-28da4d7ba422 is claimed 00:11:12.625 [2024-07-26 13:10:53.021439] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x15939f0 00:11:12.625 [2024-07-26 13:10:53.021450] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:11:12.625 [2024-07-26 13:10:53.021627] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1595000 00:11:12.625 [2024-07-26 13:10:53.021781] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15939f0 00:11:12.625 [2024-07-26 13:10:53.021791] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x15939f0 00:11:12.625 [2024-07-26 13:10:53.021904] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.625 13:10:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:11:12.625 13:10:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:11:12.896 13:10:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 64 == 64 )) 00:11:12.896 13:10:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:11:12.896 13:10:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:11:13.158 13:10:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 64 == 64 )) 00:11:13.159 13:10:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:13.159 13:10:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:11:13.159 13:10:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:13.159 13:10:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:11:13.417 [2024-07-26 13:10:53.695204] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:13.417 13:10:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:13.417 13:10:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:11:13.417 13:10:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 122880 == 122880 )) 00:11:13.417 13:10:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol0 100 00:11:13.417 [2024-07-26 13:10:53.923738] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:13.417 [2024-07-26 13:10:53.923758] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '878c6546-204a-4c8b-90b7-b911f36820d7' was resized: old size 131072, new size 204800 00:11:13.417 13:10:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol1 100 00:11:13.675 [2024-07-26 13:10:54.148304] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:13.675 [2024-07-26 13:10:54.148322] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'd1481aee-e619-4d40-aef9-28da4d7ba422' was resized: old size 131072, new size 204800 00:11:13.675 [2024-07-26 13:10:54.148344] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 
to 196608 00:11:13.675 13:10:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # jq '.[].num_blocks' 00:11:13.675 13:10:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:11:13.934 13:10:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # (( 100 == 100 )) 00:11:13.934 13:10:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:11:13.934 13:10:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # jq '.[].num_blocks' 00:11:14.192 13:10:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # (( 100 == 100 )) 00:11:14.192 13:10:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:11:14.192 13:10:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:11:14.192 13:10:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:11:14.192 13:10:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # jq '.[].num_blocks' 00:11:14.450 [2024-07-26 13:10:54.826327] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:14.451 13:10:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:11:14.451 13:10:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:11:14.451 13:10:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # (( 196608 == 196608 )) 00:11:14.451 13:10:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@912 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt0 00:11:14.709 [2024-07-26 13:10:55.058737] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:11:14.709 [2024-07-26 13:10:55.058792] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:11:14.709 [2024-07-26 13:10:55.058814] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:11:14.709 [2024-07-26 13:10:55.058926] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.709 [2024-07-26 13:10:55.059057] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.709 [2024-07-26 13:10:55.059112] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.709 [2024-07-26 13:10:55.059124] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15939f0 name Raid, state offline 00:11:14.709 13:10:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@913 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:11:14.968 [2024-07-26 13:10:55.287309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:11:14.968 [2024-07-26 13:10:55.287342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.968 [2024-07-26 13:10:55.287359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1599ee0 00:11:14.968 [2024-07-26 13:10:55.287371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.968 [2024-07-26 13:10:55.288863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.968 [2024-07-26 13:10:55.288891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:11:14.968 [2024-07-26 13:10:55.289992] 
bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 878c6546-204a-4c8b-90b7-b911f36820d7 00:11:14.968 [2024-07-26 13:10:55.290028] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 878c6546-204a-4c8b-90b7-b911f36820d7 is claimed 00:11:14.968 [2024-07-26 13:10:55.290105] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev d1481aee-e619-4d40-aef9-28da4d7ba422 00:11:14.968 [2024-07-26 13:10:55.290122] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev d1481aee-e619-4d40-aef9-28da4d7ba422 is claimed 00:11:14.968 [2024-07-26 13:10:55.290236] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev d1481aee-e619-4d40-aef9-28da4d7ba422 (2) smaller than existing raid bdev Raid (3) 00:11:14.968 [2024-07-26 13:10:55.290270] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1597530 00:11:14.968 [2024-07-26 13:10:55.290277] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:14.968 [2024-07-26 13:10:55.290429] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1592060 00:11:14.968 [2024-07-26 13:10:55.290566] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1597530 00:11:14.968 [2024-07-26 13:10:55.290576] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x1597530 00:11:14.968 [2024-07-26 13:10:55.290672] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.968 pt0 00:11:14.968 13:10:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:11:14.968 13:10:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:11:14.968 13:10:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:11:14.968 
13:10:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # jq '.[].num_blocks' 00:11:15.227 [2024-07-26 13:10:55.516143] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:15.227 13:10:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:11:15.227 13:10:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:11:15.227 13:10:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # (( 196608 == 196608 )) 00:11:15.227 13:10:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@922 -- # killprocess 650131 00:11:15.227 13:10:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 650131 ']' 00:11:15.227 13:10:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 650131 00:11:15.227 13:10:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:15.227 13:10:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:15.227 13:10:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 650131 00:11:15.227 13:10:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:15.227 13:10:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:15.227 13:10:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 650131' 00:11:15.227 killing process with pid 650131 00:11:15.227 13:10:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 650131 00:11:15.227 [2024-07-26 13:10:55.622301] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:15.227 [2024-07-26 13:10:55.622350] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:11:15.227 [2024-07-26 13:10:55.622388] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:15.227 [2024-07-26 13:10:55.622398] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1597530 name Raid, state offline 00:11:15.227 13:10:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 650131 00:11:15.227 [2024-07-26 13:10:55.700337] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:15.487 13:10:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@924 -- # return 0 00:11:15.487 00:11:15.487 real 0m5.373s 00:11:15.487 user 0m8.747s 00:11:15.487 sys 0m1.151s 00:11:15.487 13:10:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:15.487 13:10:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.487 ************************************ 00:11:15.487 END TEST raid1_resize_superblock_test 00:11:15.487 ************************************ 00:11:15.487 13:10:55 bdev_raid -- bdev/bdev_raid.sh@935 -- # uname -s 00:11:15.487 13:10:55 bdev_raid -- bdev/bdev_raid.sh@935 -- # '[' Linux = Linux ']' 00:11:15.487 13:10:55 bdev_raid -- bdev/bdev_raid.sh@935 -- # modprobe -n nbd 00:11:15.487 13:10:55 bdev_raid -- bdev/bdev_raid.sh@936 -- # has_nbd=true 00:11:15.487 13:10:55 bdev_raid -- bdev/bdev_raid.sh@937 -- # modprobe nbd 00:11:15.487 13:10:55 bdev_raid -- bdev/bdev_raid.sh@938 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:11:15.487 13:10:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:15.487 13:10:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:15.487 13:10:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:15.487 ************************************ 00:11:15.487 START TEST raid_function_test_raid0 00:11:15.487 ************************************ 
00:11:15.487 13:10:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:11:15.487 13:10:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:11:15.487 13:10:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:11:15.487 13:10:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:11:15.487 13:10:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=651088 00:11:15.487 13:10:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 651088' 00:11:15.487 Process raid pid: 651088 00:11:15.487 13:10:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:15.487 13:10:55 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 651088 /var/tmp/spdk-raid.sock 00:11:15.487 13:10:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 651088 ']' 00:11:15.487 13:10:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:15.487 13:10:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:15.487 13:10:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:15.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:11:15.487 13:10:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:15.487 13:10:55 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:15.746 [2024-07-26 13:10:56.046465] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:11:15.746 [2024-07-26 13:10:56.046523] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 
0000:3d:02.1 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3f:01.7 cannot be 
used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:15.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:15.746 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:15.746 [2024-07-26 13:10:56.181785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.746 [2024-07-26 13:10:56.262901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.005 [2024-07-26 13:10:56.326252] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.005 [2024-07-26 13:10:56.326287] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.573 13:10:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:16.573 13:10:56 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:11:16.573 13:10:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:11:16.573 13:10:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local 
raid_level=raid0 00:11:16.573 13:10:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:11:16.573 13:10:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:11:16.573 13:10:56 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:11:16.842 [2024-07-26 13:10:57.191862] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:16.842 [2024-07-26 13:10:57.193263] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:16.842 [2024-07-26 13:10:57.193321] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xe4fa50 00:11:16.842 [2024-07-26 13:10:57.193331] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:16.842 [2024-07-26 13:10:57.193516] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xcb2d00 00:11:16.842 [2024-07-26 13:10:57.193624] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe4fa50 00:11:16.842 [2024-07-26 13:10:57.193633] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0xe4fa50 00:11:16.842 [2024-07-26 13:10:57.193729] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.842 Base_1 00:11:16.842 Base_2 00:11:16.842 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:11:16.842 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:11:16.842 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:11:17.099 13:10:57 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:11:17.099 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:11:17.099 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:11:17.099 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:11:17.099 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:11:17.099 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:17.099 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:17.099 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:17.099 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:11:17.099 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:17.099 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:17.099 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:11:17.356 [2024-07-26 13:10:57.657118] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc92b30 00:11:17.356 /dev/nbd0 00:11:17.356 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:17.356 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:17.356 13:10:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:17.356 13:10:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:11:17.356 13:10:57 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:17.356 13:10:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:17.356 13:10:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:17.356 13:10:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:11:17.356 13:10:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:17.356 13:10:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:17.356 13:10:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.356 1+0 records in 00:11:17.356 1+0 records out 00:11:17.356 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000160661 s, 25.5 MB/s 00:11:17.356 13:10:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.356 13:10:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:11:17.356 13:10:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.356 13:10:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:17.356 13:10:57 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:11:17.356 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:17.356 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:17.356 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:11:17.356 13:10:57 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:11:17.356 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:11:17.612 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:17.612 { 00:11:17.612 "nbd_device": "/dev/nbd0", 00:11:17.612 "bdev_name": "raid" 00:11:17.612 } 00:11:17.612 ]' 00:11:17.612 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:17.612 { 00:11:17.612 "nbd_device": "/dev/nbd0", 00:11:17.612 "bdev_name": "raid" 00:11:17.612 } 00:11:17.612 ]' 00:11:17.612 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:17.612 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:11:17.612 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:11:17.612 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:17.612 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:11:17.612 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:11:17.612 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:11:17.612 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:11:17.612 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:11:17.612 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:11:17.612 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:11:17.612 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 
00:11:17.612 13:10:57 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:11:17.612 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:11:17.612 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:11:17.612 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:11:17.612 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:11:17.612 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:11:17.612 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:11:17.612 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:11:17.612 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:11:17.612 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:11:17.612 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:11:17.612 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:11:17.612 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:11:17.612 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:11:17.612 4096+0 records in 00:11:17.612 4096+0 records out 00:11:17.612 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0182326 s, 115 MB/s 00:11:17.612 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:11:17.870 4096+0 records in 00:11:17.870 4096+0 records out 00:11:17.870 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.189952 s, 11.0 MB/s 00:11:17.870 13:10:58 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:11:17.870 128+0 records in 00:11:17.870 128+0 records out 00:11:17.870 65536 bytes (66 kB, 64 KiB) copied, 0.000821645 s, 79.8 MB/s 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:11:17.870 2035+0 records in 00:11:17.870 2035+0 records out 00:11:17.870 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0117616 
s, 88.6 MB/s 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:11:17.870 456+0 records in 00:11:17.870 456+0 records out 00:11:17.870 233472 bytes (233 kB, 228 KiB) copied, 0.00269308 s, 86.7 MB/s 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:17.870 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:11:18.128 [2024-07-26 13:10:58.539928] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.128 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:18.128 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:18.128 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:18.128 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:18.128 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:18.128 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:18.128 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:11:18.128 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:11:18.128 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:11:18.128 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:11:18.128 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock nbd_get_disks 00:11:18.387 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:18.387 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:18.387 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:18.387 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:18.387 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:18.387 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:18.387 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:11:18.387 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:11:18.387 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:18.387 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:11:18.387 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:11:18.387 13:10:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 651088 00:11:18.387 13:10:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 651088 ']' 00:11:18.387 13:10:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 651088 00:11:18.387 13:10:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:11:18.387 13:10:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:18.387 13:10:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 651088 00:11:18.387 13:10:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:18.387 13:10:58 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:18.387 13:10:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 651088' 00:11:18.387 killing process with pid 651088 00:11:18.387 13:10:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 651088 00:11:18.387 13:10:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 651088 00:11:18.387 [2024-07-26 13:10:58.879425] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:18.387 [2024-07-26 13:10:58.879489] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:18.387 [2024-07-26 13:10:58.879528] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:18.387 [2024-07-26 13:10:58.879539] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe4fa50 name raid, state offline 00:11:18.387 [2024-07-26 13:10:58.894982] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:18.645 13:10:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:11:18.645 00:11:18.645 real 0m3.093s 00:11:18.645 user 0m4.112s 00:11:18.645 sys 0m1.113s 00:11:18.645 13:10:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:18.645 13:10:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:11:18.645 ************************************ 00:11:18.645 END TEST raid_function_test_raid0 00:11:18.645 ************************************ 00:11:18.645 13:10:59 bdev_raid -- bdev/bdev_raid.sh@939 -- # run_test raid_function_test_concat raid_function_test concat 00:11:18.645 13:10:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:18.645 13:10:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:18.645 13:10:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
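The `waitfornbd`/`waitfornbd_exit` loops traced in this test poll `/proc/partitions` up to 20 times for the nbd device name before proceeding. A condensed sketch of that polling pattern, as an assumption simplified from the `common/autotest_common.sh` helpers rather than copied from them:

```shell
#!/bin/sh
# Poll /proc/partitions until the named device appears, up to 20 tries
# (assumption: simplified model of the waitfornbd helper in the trace).
waitfornbd() {
    nbd_name=$1
    i=1
    while [ "$i" -le 20 ]; do
        # Device is usable once its name shows up in the partition table.
        if grep -q -w "$nbd_name" /proc/partitions 2>/dev/null; then
            return 0
        fi
        sleep 0.1
        i=$((i + 1))
    done
    return 1
}

# A name that never appears times out and returns nonzero.
waitfornbd no_such_nbd_device && echo present || echo absent
```

The bounded retry with a short sleep is what keeps the test from racing `nbd_start_disk`: the RPC returns before the kernel has finished registering the device node.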
00:11:18.645 ************************************ 00:11:18.645 START TEST raid_function_test_concat 00:11:18.645 ************************************ 00:11:18.645 13:10:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:11:18.645 13:10:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 00:11:18.645 13:10:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:11:18.645 13:10:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:11:18.645 13:10:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=651702 00:11:18.646 13:10:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 651702' 00:11:18.646 Process raid pid: 651702 00:11:18.646 13:10:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 651702 /var/tmp/spdk-raid.sock 00:11:18.646 13:10:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 651702 ']' 00:11:18.646 13:10:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:18.646 13:10:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:18.646 13:10:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:18.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:11:18.646 13:10:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:18.646 13:10:59 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:18.646 13:10:59 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:18.905 [2024-07-26 13:10:59.213179] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:11:18.905 [2024-07-26 13:10:59.213239] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.905 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.905 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:18.905 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.905 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:18.905 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.905 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:18.905 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.905 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:18.905 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.905 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:18.905 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.905 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:18.905 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.905 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:18.905 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.905 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:18.905 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:11:18.905 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:18.905 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.905 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:18.905 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.905 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:18.905 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.905 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:18.905 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.905 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:18.905 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.905 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:18.905 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.905 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:18.905 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.905 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:18.905 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.905 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:18.905 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.905 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:18.905 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.905 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:18.905 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.905 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:18.905 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.905 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:18.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.906 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:18.906 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:11:18.906 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:18.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.906 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:18.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.906 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:18.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.906 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:18.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.906 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:18.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.906 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:18.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.906 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:18.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.906 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:18.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.906 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:18.906 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:18.906 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:18.906 [2024-07-26 13:10:59.346516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.164 [2024-07-26 13:10:59.432720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.164 [2024-07-26 13:10:59.495634] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.164 [2024-07-26 13:10:59.495671] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.097 13:11:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:20.097 13:11:00 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 
00:11:20.097 13:11:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:11:20.097 13:11:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_level=concat 00:11:20.097 13:11:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:11:20.097 13:11:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:11:20.097 13:11:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:11:20.355 [2024-07-26 13:11:00.630214] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:20.355 [2024-07-26 13:11:00.631549] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:20.355 [2024-07-26 13:11:00.631602] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x107aa50 00:11:20.355 [2024-07-26 13:11:00.631612] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:20.355 [2024-07-26 13:11:00.631785] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xeddd00 00:11:20.355 [2024-07-26 13:11:00.631889] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x107aa50 00:11:20.355 [2024-07-26 13:11:00.631898] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x107aa50 00:11:20.355 [2024-07-26 13:11:00.631987] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.355 Base_1 00:11:20.355 Base_2 00:11:20.355 13:11:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:11:20.355 13:11:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:11:20.355 13:11:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:11:20.612 13:11:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:11:20.612 13:11:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:11:20.612 13:11:00 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:11:20.612 13:11:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:11:20.612 13:11:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:11:20.612 13:11:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:20.612 13:11:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:20.612 13:11:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:20.612 13:11:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:11:20.612 13:11:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:20.612 13:11:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:20.612 13:11:00 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:11:20.612 [2024-07-26 13:11:01.099444] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xebd9b0 00:11:20.612 /dev/nbd0 00:11:20.612 13:11:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:20.869 13:11:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:20.869 13:11:01 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:20.869 13:11:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:11:20.869 13:11:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:20.869 13:11:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:20.869 13:11:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:20.869 13:11:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:11:20.869 13:11:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:20.869 13:11:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:20.869 13:11:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:20.869 1+0 records in 00:11:20.869 1+0 records out 00:11:20.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210511 s, 19.5 MB/s 00:11:20.869 13:11:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:20.869 13:11:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:11:20.869 13:11:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:20.869 13:11:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:20.869 13:11:01 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:11:20.869 13:11:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:20.869 13:11:01 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:20.869 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:11:20.869 13:11:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:11:20.869 13:11:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:11:20.869 13:11:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:20.869 { 00:11:20.869 "nbd_device": "/dev/nbd0", 00:11:20.869 "bdev_name": "raid" 00:11:20.869 } 00:11:20.869 ]' 00:11:20.869 13:11:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:20.869 { 00:11:20.869 "nbd_device": "/dev/nbd0", 00:11:20.869 "bdev_name": "raid" 00:11:20.869 } 00:11:20.869 ]' 00:11:20.869 13:11:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:21.126 13:11:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:11:21.126 13:11:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:11:21.126 13:11:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:21.126 13:11:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:11:21.126 13:11:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:11:21.126 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:11:21.126 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:11:21.126 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:11:21.126 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 
00:11:21.126 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:11:21.126 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:11:21.126 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:11:21.126 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:11:21.126 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:11:21.126 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:11:21.126 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # blksize=512 00:11:21.126 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:11:21.126 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:11:21.126 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:11:21.126 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:11:21.126 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:11:21.126 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:11:21.126 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:11:21.126 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:11:21.126 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:11:21.126 4096+0 records in 00:11:21.126 4096+0 records out 00:11:21.126 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0302337 s, 69.4 MB/s 00:11:21.126 13:11:01 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:11:21.385 4096+0 records in 00:11:21.385 4096+0 records out 00:11:21.385 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.182498 s, 11.5 MB/s 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:11:21.385 128+0 records in 00:11:21.385 128+0 records out 00:11:21.385 65536 bytes (66 kB, 64 KiB) copied, 0.000837817 s, 78.2 MB/s 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:11:21.385 13:11:01 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:11:21.385 2035+0 records in 00:11:21.385 2035+0 records out 00:11:21.385 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0119295 s, 87.3 MB/s 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:11:21.385 456+0 records in 00:11:21.385 456+0 records out 00:11:21.385 233472 bytes (233 kB, 228 KiB) copied, 0.00272096 s, 85.8 MB/s 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:11:21.385 13:11:01 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:11:21.385 13:11:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:21.386 13:11:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:21.386 13:11:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:11:21.386 13:11:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:21.386 13:11:01 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:11:21.643 13:11:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:21.643 [2024-07-26 13:11:02.013842] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.643 13:11:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:21.643 13:11:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:21.643 13:11:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:21.643 13:11:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:21.643 13:11:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:21.643 13:11:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:11:21.643 13:11:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:11:21.643 13:11:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 
-- # nbd_get_count /var/tmp/spdk-raid.sock 00:11:21.643 13:11:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:11:21.643 13:11:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:11:21.902 13:11:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:21.902 13:11:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:21.902 13:11:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:21.902 13:11:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:21.902 13:11:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:11:21.902 13:11:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:21.902 13:11:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:11:21.902 13:11:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:11:21.902 13:11:02 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:11:21.902 13:11:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:11:21.902 13:11:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:11:21.902 13:11:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 651702 00:11:21.902 13:11:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 651702 ']' 00:11:21.902 13:11:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 651702 00:11:21.902 13:11:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:11:21.902 13:11:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 
-- # '[' Linux = Linux ']' 00:11:21.902 13:11:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 651702 00:11:21.902 13:11:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:21.902 13:11:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:21.902 13:11:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 651702' 00:11:21.902 killing process with pid 651702 00:11:21.902 13:11:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 651702 00:11:21.902 [2024-07-26 13:11:02.365117] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:21.902 [2024-07-26 13:11:02.365183] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:21.902 [2024-07-26 13:11:02.365225] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:21.902 [2024-07-26 13:11:02.365236] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x107aa50 name raid, state offline 00:11:21.902 13:11:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 651702 00:11:21.902 [2024-07-26 13:11:02.380611] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:22.160 13:11:02 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:11:22.160 00:11:22.160 real 0m3.414s 00:11:22.160 user 0m4.706s 00:11:22.160 sys 0m1.191s 00:11:22.160 13:11:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:22.160 13:11:02 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:11:22.160 ************************************ 00:11:22.160 END TEST raid_function_test_concat 00:11:22.160 ************************************ 00:11:22.160 13:11:02 bdev_raid -- 
bdev/bdev_raid.sh@942 -- # run_test raid0_resize_test raid_resize_test 0 00:11:22.160 13:11:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:22.160 13:11:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:22.160 13:11:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:22.160 ************************************ 00:11:22.160 START TEST raid0_resize_test 00:11:22.160 ************************************ 00:11:22.160 13:11:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:11:22.160 13:11:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local raid_level=0 00:11:22.160 13:11:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local blksize=512 00:11:22.160 13:11:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local bdev_size_mb=32 00:11:22.160 13:11:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local new_bdev_size_mb=64 00:11:22.160 13:11:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local blkcnt 00:11:22.160 13:11:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local raid_size_mb 00:11:22.160 13:11:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@353 -- # local new_raid_size_mb 00:11:22.160 13:11:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # local expected_size 00:11:22.160 13:11:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # raid_pid=652323 00:11:22.160 13:11:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@358 -- # echo 'Process raid pid: 652323' 00:11:22.160 Process raid pid: 652323 00:11:22.160 13:11:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:22.160 13:11:02 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # waitforlisten 652323 /var/tmp/spdk-raid.sock 00:11:22.160 13:11:02 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@831 -- # '[' -z 652323 ']' 00:11:22.160 13:11:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:22.160 13:11:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:22.160 13:11:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:22.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:22.160 13:11:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:22.160 13:11:02 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.418 [2024-07-26 13:11:02.714164] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:11:22.418 [2024-07-26 13:11:02.714220] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.418 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:22.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.418 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:22.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.418 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:22.418 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.418 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:22.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.419 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:22.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.419 EAL: Requested device 
0000:3d:01.5 cannot be used 00:11:22.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.419 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:22.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.419 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:22.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.419 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:22.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.419 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:22.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.419 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:22.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.419 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:22.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.419 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:22.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.419 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:22.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.419 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:22.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.419 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:22.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.419 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:22.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.419 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:22.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.419 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:22.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.419 EAL: Requested device 0000:3f:01.3 cannot be 
used 00:11:22.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.419 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:22.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.419 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:22.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.419 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:22.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.419 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:22.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.419 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:22.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.419 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:22.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.419 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:22.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.419 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:22.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.419 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:22.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.419 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:22.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.419 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:22.419 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:22.419 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:22.419 [2024-07-26 13:11:02.847874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.419 [2024-07-26 13:11:02.935250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.677 [2024-07-26 13:11:02.995929] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: 
raid_bdev_get_ctx_size 00:11:22.677 [2024-07-26 13:11:02.995964] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.243 13:11:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:23.243 13:11:03 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:11:23.243 13:11:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:11:23.501 Base_1 00:11:23.502 13:11:03 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:11:23.760 Base_2 00:11:23.760 13:11:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@364 -- # '[' 0 -eq 0 ']' 00:11:23.760 13:11:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:11:23.760 [2024-07-26 13:11:04.256792] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:23.760 [2024-07-26 13:11:04.258193] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:23.760 [2024-07-26 13:11:04.258239] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x13b1c80 00:11:23.760 [2024-07-26 13:11:04.258248] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:23.760 [2024-07-26 13:11:04.258436] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xef5030 00:11:23.760 [2024-07-26 13:11:04.258528] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x13b1c80 00:11:23.760 [2024-07-26 13:11:04.258537] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x13b1c80 00:11:23.760 
[2024-07-26 13:11:04.258631] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.760 13:11:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:11:24.060 [2024-07-26 13:11:04.473338] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:24.060 [2024-07-26 13:11:04.473353] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:11:24.060 true 00:11:24.060 13:11:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:11:24.060 13:11:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # jq '.[].num_blocks' 00:11:24.319 [2024-07-26 13:11:04.698076] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:24.319 13:11:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # blkcnt=131072 00:11:24.319 13:11:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # raid_size_mb=64 00:11:24.319 13:11:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # '[' 0 -eq 0 ']' 00:11:24.319 13:11:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # expected_size=64 00:11:24.319 13:11:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 64 '!=' 64 ']' 00:11:24.319 13:11:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:11:24.579 [2024-07-26 13:11:04.922498] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:24.579 [2024-07-26 13:11:04.922516] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:11:24.579 [2024-07-26 
13:11:04.922540] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:11:24.579 true 00:11:24.579 13:11:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:11:24.579 13:11:04 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # jq '.[].num_blocks' 00:11:24.839 [2024-07-26 13:11:05.147239] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:24.839 13:11:05 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # blkcnt=262144 00:11:24.839 13:11:05 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@391 -- # raid_size_mb=128 00:11:24.839 13:11:05 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@392 -- # '[' 0 -eq 0 ']' 00:11:24.839 13:11:05 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@393 -- # expected_size=128 00:11:24.839 13:11:05 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@397 -- # '[' 128 '!=' 128 ']' 00:11:24.839 13:11:05 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@402 -- # killprocess 652323 00:11:24.839 13:11:05 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 652323 ']' 00:11:24.839 13:11:05 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 652323 00:11:24.839 13:11:05 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:11:24.839 13:11:05 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:24.839 13:11:05 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 652323 00:11:24.839 13:11:05 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:24.839 13:11:05 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:24.839 13:11:05 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 652323' 00:11:24.839 killing process with pid 652323 00:11:24.839 13:11:05 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 652323 00:11:24.839 [2024-07-26 13:11:05.199310] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:24.839 [2024-07-26 13:11:05.199358] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:24.839 [2024-07-26 13:11:05.199396] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:24.839 [2024-07-26 13:11:05.199406] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x13b1c80 name Raid, state offline 00:11:24.839 13:11:05 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 652323 00:11:24.839 [2024-07-26 13:11:05.200576] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:25.098 13:11:05 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@404 -- # return 0 00:11:25.098 00:11:25.098 real 0m2.718s 00:11:25.098 user 0m4.165s 00:11:25.098 sys 0m0.585s 00:11:25.098 13:11:05 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:25.098 13:11:05 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.098 ************************************ 00:11:25.098 END TEST raid0_resize_test 00:11:25.098 ************************************ 00:11:25.098 13:11:05 bdev_raid -- bdev/bdev_raid.sh@943 -- # run_test raid1_resize_test raid_resize_test 1 00:11:25.098 13:11:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:25.098 13:11:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:25.098 13:11:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:25.098 ************************************ 00:11:25.098 START TEST raid1_resize_test 00:11:25.098 ************************************ 00:11:25.098 13:11:05 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@1125 -- # raid_resize_test 1 00:11:25.099 13:11:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # local raid_level=1 00:11:25.099 13:11:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@348 -- # local blksize=512 00:11:25.099 13:11:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # local bdev_size_mb=32 00:11:25.099 13:11:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@350 -- # local new_bdev_size_mb=64 00:11:25.099 13:11:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@351 -- # local blkcnt 00:11:25.099 13:11:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # local raid_size_mb 00:11:25.099 13:11:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@353 -- # local new_raid_size_mb 00:11:25.099 13:11:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@354 -- # local expected_size 00:11:25.099 13:11:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@357 -- # raid_pid=652881 00:11:25.099 13:11:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@358 -- # echo 'Process raid pid: 652881' 00:11:25.099 Process raid pid: 652881 00:11:25.099 13:11:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:25.099 13:11:05 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # waitforlisten 652881 /var/tmp/spdk-raid.sock 00:11:25.099 13:11:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 652881 ']' 00:11:25.099 13:11:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:25.099 13:11:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:25.099 13:11:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:11:25.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:25.099 13:11:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:25.099 13:11:05 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.099 [2024-07-26 13:11:05.508318] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:11:25.099 [2024-07-26 13:11:05.508374] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:25.099 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:25.099 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:25.099 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:25.099 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:25.358 [2024-07-26 13:11:05.627466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.358 [2024-07-26 13:11:05.712931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.358 [2024-07-26 13:11:05.776692] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:25.358 [2024-07-26 13:11:05.776727] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:25.927 13:11:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:25.927 13:11:06 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:11:25.927 13:11:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:11:26.186 Base_1 00:11:26.186 13:11:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@362 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:11:26.445 Base_2 00:11:26.445 13:11:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # '[' 1 -eq 0 ']' 00:11:26.445 13:11:06 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@367 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r 1 -b 'Base_1 Base_2' -n Raid 00:11:26.705 [2024-07-26 13:11:07.065570] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:11:26.705 [2024-07-26 13:11:07.066998] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:11:26.705 [2024-07-26 13:11:07.067053] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1ed9c80 00:11:26.705 [2024-07-26 13:11:07.067063] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:26.705 [2024-07-26 13:11:07.067263] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a1d030 00:11:26.705 [2024-07-26 13:11:07.067353] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1ed9c80 00:11:26.705 [2024-07-26 13:11:07.067363] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x1ed9c80 00:11:26.705 [2024-07-26 13:11:07.067460] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.705 13:11:07 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:11:26.965 [2024-07-26 13:11:07.290147] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 
00:11:26.965 [2024-07-26 13:11:07.290164] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:11:26.965 true 00:11:26.965 13:11:07 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # jq '.[].num_blocks' 00:11:26.965 13:11:07 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:11:26.965 [2024-07-26 13:11:07.454726] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:26.965 13:11:07 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # blkcnt=65536 00:11:26.965 13:11:07 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # raid_size_mb=32 00:11:26.965 13:11:07 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # '[' 1 -eq 0 ']' 00:11:26.965 13:11:07 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@379 -- # expected_size=32 00:11:26.965 13:11:07 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 32 '!=' 32 ']' 00:11:26.965 13:11:07 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:11:27.225 [2024-07-26 13:11:07.675160] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:11:27.225 [2024-07-26 13:11:07.675181] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:11:27.225 [2024-07-26 13:11:07.675205] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:11:27.225 true 00:11:27.225 13:11:07 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:11:27.225 13:11:07 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@390 -- # jq '.[].num_blocks' 00:11:27.484 [2024-07-26 13:11:07.899879] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:27.484 13:11:07 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # blkcnt=131072 00:11:27.484 13:11:07 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@391 -- # raid_size_mb=64 00:11:27.484 13:11:07 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@392 -- # '[' 1 -eq 0 ']' 00:11:27.484 13:11:07 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@395 -- # expected_size=64 00:11:27.484 13:11:07 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@397 -- # '[' 64 '!=' 64 ']' 00:11:27.484 13:11:07 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@402 -- # killprocess 652881 00:11:27.485 13:11:07 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 652881 ']' 00:11:27.485 13:11:07 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 652881 00:11:27.485 13:11:07 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:11:27.485 13:11:07 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:27.485 13:11:07 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 652881 00:11:27.485 13:11:07 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:27.485 13:11:07 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:27.485 13:11:07 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 652881' 00:11:27.485 killing process with pid 652881 00:11:27.485 13:11:07 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 652881 00:11:27.485 [2024-07-26 13:11:07.977239] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:27.485 [2024-07-26 13:11:07.977291] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:11:27.485 13:11:07 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 652881 00:11:27.485 [2024-07-26 13:11:07.977610] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:27.485 [2024-07-26 13:11:07.977622] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ed9c80 name Raid, state offline 00:11:27.485 [2024-07-26 13:11:07.978536] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:27.744 13:11:08 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@404 -- # return 0 00:11:27.744 00:11:27.744 real 0m2.703s 00:11:27.744 user 0m4.108s 00:11:27.744 sys 0m0.586s 00:11:27.744 13:11:08 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.744 13:11:08 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.744 ************************************ 00:11:27.744 END TEST raid1_resize_test 00:11:27.744 ************************************ 00:11:27.744 13:11:08 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:11:27.744 13:11:08 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:11:27.744 13:11:08 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:11:27.745 13:11:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:27.745 13:11:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.745 13:11:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:27.745 ************************************ 00:11:27.745 START TEST raid_state_function_test 00:11:27.745 ************************************ 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:11:27.745 13:11:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@231 -- # strip_size=64 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=653442 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 653442' 00:11:27.745 Process raid pid: 653442 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 653442 /var/tmp/spdk-raid.sock 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 653442 ']' 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:27.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.745 13:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:28.005 [2024-07-26 13:11:08.282119] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:11:28.005 [2024-07-26 13:11:08.282184] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:28.005 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:28.005 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:28.005 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:28.005 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:28.005 [2024-07-26 13:11:08.413415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.005 [2024-07-26 13:11:08.499010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.264 [2024-07-26 13:11:08.562148] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:28.264 [2024-07-26 13:11:08.562182] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:28.834 13:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:28.834 13:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:28.834 13:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:28.834 [2024-07-26 13:11:09.307872] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:28.834 [2024-07-26 13:11:09.307909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 
00:11:28.834 [2024-07-26 13:11:09.307920] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:28.834 [2024-07-26 13:11:09.307931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:28.834 13:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:28.834 13:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:28.834 13:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:28.834 13:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:28.834 13:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:28.834 13:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:28.834 13:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:28.834 13:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:28.834 13:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:28.834 13:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:28.834 13:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:28.834 13:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.093 13:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:29.093 "name": "Existed_Raid", 00:11:29.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.093 "strip_size_kb": 64, 
00:11:29.093 "state": "configuring", 00:11:29.093 "raid_level": "raid0", 00:11:29.093 "superblock": false, 00:11:29.093 "num_base_bdevs": 2, 00:11:29.093 "num_base_bdevs_discovered": 0, 00:11:29.093 "num_base_bdevs_operational": 2, 00:11:29.093 "base_bdevs_list": [ 00:11:29.093 { 00:11:29.093 "name": "BaseBdev1", 00:11:29.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.093 "is_configured": false, 00:11:29.093 "data_offset": 0, 00:11:29.093 "data_size": 0 00:11:29.093 }, 00:11:29.093 { 00:11:29.093 "name": "BaseBdev2", 00:11:29.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.093 "is_configured": false, 00:11:29.093 "data_offset": 0, 00:11:29.093 "data_size": 0 00:11:29.093 } 00:11:29.093 ] 00:11:29.093 }' 00:11:29.093 13:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:29.093 13:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.033 13:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:30.303 [2024-07-26 13:11:10.591093] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:30.303 [2024-07-26 13:11:10.591125] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa1af20 name Existed_Raid, state configuring 00:11:30.303 13:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:30.303 [2024-07-26 13:11:10.811686] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:30.303 [2024-07-26 13:11:10.811717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:30.303 [2024-07-26 13:11:10.811726] bdev.c:8190:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:30.303 [2024-07-26 13:11:10.811737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:30.562 13:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:30.562 [2024-07-26 13:11:11.049756] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:30.562 BaseBdev1 00:11:30.562 13:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:30.562 13:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:30.562 13:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:30.562 13:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:30.562 13:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:30.562 13:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:30.562 13:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:30.821 13:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:31.081 [ 00:11:31.081 { 00:11:31.081 "name": "BaseBdev1", 00:11:31.081 "aliases": [ 00:11:31.081 "383e507b-5179-4433-9e01-a499b47b1a2f" 00:11:31.081 ], 00:11:31.081 "product_name": "Malloc disk", 00:11:31.081 "block_size": 512, 00:11:31.081 "num_blocks": 65536, 00:11:31.081 "uuid": "383e507b-5179-4433-9e01-a499b47b1a2f", 00:11:31.081 
"assigned_rate_limits": { 00:11:31.081 "rw_ios_per_sec": 0, 00:11:31.081 "rw_mbytes_per_sec": 0, 00:11:31.081 "r_mbytes_per_sec": 0, 00:11:31.081 "w_mbytes_per_sec": 0 00:11:31.081 }, 00:11:31.081 "claimed": true, 00:11:31.081 "claim_type": "exclusive_write", 00:11:31.081 "zoned": false, 00:11:31.081 "supported_io_types": { 00:11:31.081 "read": true, 00:11:31.081 "write": true, 00:11:31.081 "unmap": true, 00:11:31.081 "flush": true, 00:11:31.081 "reset": true, 00:11:31.081 "nvme_admin": false, 00:11:31.081 "nvme_io": false, 00:11:31.081 "nvme_io_md": false, 00:11:31.081 "write_zeroes": true, 00:11:31.081 "zcopy": true, 00:11:31.081 "get_zone_info": false, 00:11:31.081 "zone_management": false, 00:11:31.081 "zone_append": false, 00:11:31.081 "compare": false, 00:11:31.081 "compare_and_write": false, 00:11:31.081 "abort": true, 00:11:31.081 "seek_hole": false, 00:11:31.081 "seek_data": false, 00:11:31.081 "copy": true, 00:11:31.081 "nvme_iov_md": false 00:11:31.081 }, 00:11:31.081 "memory_domains": [ 00:11:31.081 { 00:11:31.081 "dma_device_id": "system", 00:11:31.081 "dma_device_type": 1 00:11:31.081 }, 00:11:31.081 { 00:11:31.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.081 "dma_device_type": 2 00:11:31.081 } 00:11:31.081 ], 00:11:31.081 "driver_specific": {} 00:11:31.081 } 00:11:31.081 ] 00:11:31.081 13:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:31.081 13:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:31.081 13:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:31.081 13:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:31.081 13:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:31.081 13:11:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:31.081 13:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:31.081 13:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:31.081 13:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:31.081 13:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:31.081 13:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:31.081 13:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:31.081 13:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.340 13:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:31.340 "name": "Existed_Raid", 00:11:31.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.340 "strip_size_kb": 64, 00:11:31.340 "state": "configuring", 00:11:31.340 "raid_level": "raid0", 00:11:31.340 "superblock": false, 00:11:31.340 "num_base_bdevs": 2, 00:11:31.340 "num_base_bdevs_discovered": 1, 00:11:31.340 "num_base_bdevs_operational": 2, 00:11:31.340 "base_bdevs_list": [ 00:11:31.340 { 00:11:31.340 "name": "BaseBdev1", 00:11:31.340 "uuid": "383e507b-5179-4433-9e01-a499b47b1a2f", 00:11:31.340 "is_configured": true, 00:11:31.340 "data_offset": 0, 00:11:31.340 "data_size": 65536 00:11:31.340 }, 00:11:31.340 { 00:11:31.340 "name": "BaseBdev2", 00:11:31.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.340 "is_configured": false, 00:11:31.340 "data_offset": 0, 00:11:31.340 "data_size": 0 00:11:31.340 } 00:11:31.340 ] 00:11:31.340 }' 00:11:31.340 13:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- 
# xtrace_disable 00:11:31.340 13:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.909 13:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:32.167 [2024-07-26 13:11:12.485532] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:32.167 [2024-07-26 13:11:12.485567] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa1a810 name Existed_Raid, state configuring 00:11:32.167 13:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:32.424 [2024-07-26 13:11:12.718174] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:32.424 [2024-07-26 13:11:12.719534] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:32.424 [2024-07-26 13:11:12.719565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:32.424 13:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:32.424 13:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:32.424 13:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:32.424 13:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:32.424 13:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:32.425 13:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:32.425 13:11:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:32.425 13:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:32.425 13:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:32.425 13:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:32.425 13:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:32.425 13:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:32.425 13:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:32.425 13:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.682 13:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:32.682 "name": "Existed_Raid", 00:11:32.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.682 "strip_size_kb": 64, 00:11:32.682 "state": "configuring", 00:11:32.682 "raid_level": "raid0", 00:11:32.682 "superblock": false, 00:11:32.682 "num_base_bdevs": 2, 00:11:32.682 "num_base_bdevs_discovered": 1, 00:11:32.682 "num_base_bdevs_operational": 2, 00:11:32.682 "base_bdevs_list": [ 00:11:32.682 { 00:11:32.682 "name": "BaseBdev1", 00:11:32.682 "uuid": "383e507b-5179-4433-9e01-a499b47b1a2f", 00:11:32.682 "is_configured": true, 00:11:32.682 "data_offset": 0, 00:11:32.682 "data_size": 65536 00:11:32.682 }, 00:11:32.682 { 00:11:32.682 "name": "BaseBdev2", 00:11:32.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.682 "is_configured": false, 00:11:32.682 "data_offset": 0, 00:11:32.682 "data_size": 0 00:11:32.682 } 00:11:32.682 ] 00:11:32.682 }' 00:11:32.682 13:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- 
# xtrace_disable 00:11:32.682 13:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.248 13:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:33.248 [2024-07-26 13:11:13.743970] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:33.248 [2024-07-26 13:11:13.744001] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xa1b610 00:11:33.248 [2024-07-26 13:11:13.744008] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:33.248 [2024-07-26 13:11:13.744256] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xbbf130 00:11:33.248 [2024-07-26 13:11:13.744388] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xa1b610 00:11:33.248 [2024-07-26 13:11:13.744398] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xa1b610 00:11:33.248 [2024-07-26 13:11:13.744548] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.248 BaseBdev2 00:11:33.248 13:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:33.248 13:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:33.248 13:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:33.248 13:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:33.248 13:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:33.248 13:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:33.248 13:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:33.507 13:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:33.766 [ 00:11:33.766 { 00:11:33.766 "name": "BaseBdev2", 00:11:33.766 "aliases": [ 00:11:33.766 "91a133c6-3f5b-44dc-901a-8bdaf24ea869" 00:11:33.766 ], 00:11:33.766 "product_name": "Malloc disk", 00:11:33.766 "block_size": 512, 00:11:33.766 "num_blocks": 65536, 00:11:33.766 "uuid": "91a133c6-3f5b-44dc-901a-8bdaf24ea869", 00:11:33.766 "assigned_rate_limits": { 00:11:33.766 "rw_ios_per_sec": 0, 00:11:33.766 "rw_mbytes_per_sec": 0, 00:11:33.766 "r_mbytes_per_sec": 0, 00:11:33.766 "w_mbytes_per_sec": 0 00:11:33.766 }, 00:11:33.766 "claimed": true, 00:11:33.766 "claim_type": "exclusive_write", 00:11:33.766 "zoned": false, 00:11:33.766 "supported_io_types": { 00:11:33.766 "read": true, 00:11:33.766 "write": true, 00:11:33.766 "unmap": true, 00:11:33.766 "flush": true, 00:11:33.766 "reset": true, 00:11:33.766 "nvme_admin": false, 00:11:33.766 "nvme_io": false, 00:11:33.766 "nvme_io_md": false, 00:11:33.766 "write_zeroes": true, 00:11:33.766 "zcopy": true, 00:11:33.766 "get_zone_info": false, 00:11:33.766 "zone_management": false, 00:11:33.766 "zone_append": false, 00:11:33.766 "compare": false, 00:11:33.766 "compare_and_write": false, 00:11:33.766 "abort": true, 00:11:33.766 "seek_hole": false, 00:11:33.766 "seek_data": false, 00:11:33.766 "copy": true, 00:11:33.766 "nvme_iov_md": false 00:11:33.766 }, 00:11:33.766 "memory_domains": [ 00:11:33.766 { 00:11:33.766 "dma_device_id": "system", 00:11:33.766 "dma_device_type": 1 00:11:33.766 }, 00:11:33.766 { 00:11:33.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.766 "dma_device_type": 2 00:11:33.766 } 00:11:33.766 ], 00:11:33.766 "driver_specific": {} 00:11:33.766 } 00:11:33.766 ] 
00:11:33.767 13:11:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:33.767 13:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:33.767 13:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:33.767 13:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:11:33.767 13:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:33.767 13:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:33.767 13:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:33.767 13:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:33.767 13:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:33.767 13:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:33.767 13:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:33.767 13:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:33.767 13:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:33.767 13:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:33.767 13:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.026 13:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:34.026 "name": "Existed_Raid", 00:11:34.026 "uuid": "f1b542df-d34e-4c59-ade0-b30b03b33dfd", 
00:11:34.026 "strip_size_kb": 64, 00:11:34.026 "state": "online", 00:11:34.026 "raid_level": "raid0", 00:11:34.026 "superblock": false, 00:11:34.026 "num_base_bdevs": 2, 00:11:34.026 "num_base_bdevs_discovered": 2, 00:11:34.026 "num_base_bdevs_operational": 2, 00:11:34.026 "base_bdevs_list": [ 00:11:34.026 { 00:11:34.026 "name": "BaseBdev1", 00:11:34.026 "uuid": "383e507b-5179-4433-9e01-a499b47b1a2f", 00:11:34.026 "is_configured": true, 00:11:34.026 "data_offset": 0, 00:11:34.026 "data_size": 65536 00:11:34.026 }, 00:11:34.026 { 00:11:34.026 "name": "BaseBdev2", 00:11:34.026 "uuid": "91a133c6-3f5b-44dc-901a-8bdaf24ea869", 00:11:34.026 "is_configured": true, 00:11:34.026 "data_offset": 0, 00:11:34.026 "data_size": 65536 00:11:34.026 } 00:11:34.026 ] 00:11:34.026 }' 00:11:34.026 13:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:34.026 13:11:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.594 13:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:34.594 13:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:34.594 13:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:34.594 13:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:34.594 13:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:34.594 13:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:34.594 13:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:34.594 13:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:34.852 [2024-07-26 13:11:15.200072] 
bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.852 13:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:34.852 "name": "Existed_Raid", 00:11:34.852 "aliases": [ 00:11:34.852 "f1b542df-d34e-4c59-ade0-b30b03b33dfd" 00:11:34.852 ], 00:11:34.852 "product_name": "Raid Volume", 00:11:34.852 "block_size": 512, 00:11:34.852 "num_blocks": 131072, 00:11:34.852 "uuid": "f1b542df-d34e-4c59-ade0-b30b03b33dfd", 00:11:34.852 "assigned_rate_limits": { 00:11:34.852 "rw_ios_per_sec": 0, 00:11:34.852 "rw_mbytes_per_sec": 0, 00:11:34.852 "r_mbytes_per_sec": 0, 00:11:34.852 "w_mbytes_per_sec": 0 00:11:34.852 }, 00:11:34.852 "claimed": false, 00:11:34.852 "zoned": false, 00:11:34.852 "supported_io_types": { 00:11:34.852 "read": true, 00:11:34.852 "write": true, 00:11:34.852 "unmap": true, 00:11:34.852 "flush": true, 00:11:34.852 "reset": true, 00:11:34.852 "nvme_admin": false, 00:11:34.852 "nvme_io": false, 00:11:34.852 "nvme_io_md": false, 00:11:34.852 "write_zeroes": true, 00:11:34.852 "zcopy": false, 00:11:34.852 "get_zone_info": false, 00:11:34.852 "zone_management": false, 00:11:34.852 "zone_append": false, 00:11:34.852 "compare": false, 00:11:34.852 "compare_and_write": false, 00:11:34.852 "abort": false, 00:11:34.852 "seek_hole": false, 00:11:34.852 "seek_data": false, 00:11:34.852 "copy": false, 00:11:34.852 "nvme_iov_md": false 00:11:34.852 }, 00:11:34.852 "memory_domains": [ 00:11:34.852 { 00:11:34.852 "dma_device_id": "system", 00:11:34.852 "dma_device_type": 1 00:11:34.852 }, 00:11:34.852 { 00:11:34.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.852 "dma_device_type": 2 00:11:34.852 }, 00:11:34.852 { 00:11:34.852 "dma_device_id": "system", 00:11:34.852 "dma_device_type": 1 00:11:34.852 }, 00:11:34.852 { 00:11:34.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.852 "dma_device_type": 2 00:11:34.852 } 00:11:34.852 ], 00:11:34.852 "driver_specific": { 00:11:34.852 "raid": { 
00:11:34.852 "uuid": "f1b542df-d34e-4c59-ade0-b30b03b33dfd", 00:11:34.852 "strip_size_kb": 64, 00:11:34.852 "state": "online", 00:11:34.852 "raid_level": "raid0", 00:11:34.852 "superblock": false, 00:11:34.852 "num_base_bdevs": 2, 00:11:34.852 "num_base_bdevs_discovered": 2, 00:11:34.852 "num_base_bdevs_operational": 2, 00:11:34.852 "base_bdevs_list": [ 00:11:34.852 { 00:11:34.852 "name": "BaseBdev1", 00:11:34.852 "uuid": "383e507b-5179-4433-9e01-a499b47b1a2f", 00:11:34.852 "is_configured": true, 00:11:34.852 "data_offset": 0, 00:11:34.852 "data_size": 65536 00:11:34.852 }, 00:11:34.852 { 00:11:34.852 "name": "BaseBdev2", 00:11:34.852 "uuid": "91a133c6-3f5b-44dc-901a-8bdaf24ea869", 00:11:34.852 "is_configured": true, 00:11:34.852 "data_offset": 0, 00:11:34.852 "data_size": 65536 00:11:34.852 } 00:11:34.852 ] 00:11:34.852 } 00:11:34.852 } 00:11:34.852 }' 00:11:34.852 13:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:34.852 13:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:34.852 BaseBdev2' 00:11:34.852 13:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:34.852 13:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:34.852 13:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:35.111 13:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:35.111 "name": "BaseBdev1", 00:11:35.111 "aliases": [ 00:11:35.111 "383e507b-5179-4433-9e01-a499b47b1a2f" 00:11:35.111 ], 00:11:35.111 "product_name": "Malloc disk", 00:11:35.111 "block_size": 512, 00:11:35.111 "num_blocks": 65536, 00:11:35.111 "uuid": "383e507b-5179-4433-9e01-a499b47b1a2f", 
00:11:35.111 "assigned_rate_limits": { 00:11:35.111 "rw_ios_per_sec": 0, 00:11:35.111 "rw_mbytes_per_sec": 0, 00:11:35.111 "r_mbytes_per_sec": 0, 00:11:35.111 "w_mbytes_per_sec": 0 00:11:35.111 }, 00:11:35.111 "claimed": true, 00:11:35.111 "claim_type": "exclusive_write", 00:11:35.111 "zoned": false, 00:11:35.111 "supported_io_types": { 00:11:35.111 "read": true, 00:11:35.111 "write": true, 00:11:35.111 "unmap": true, 00:11:35.111 "flush": true, 00:11:35.111 "reset": true, 00:11:35.111 "nvme_admin": false, 00:11:35.111 "nvme_io": false, 00:11:35.111 "nvme_io_md": false, 00:11:35.111 "write_zeroes": true, 00:11:35.111 "zcopy": true, 00:11:35.111 "get_zone_info": false, 00:11:35.111 "zone_management": false, 00:11:35.111 "zone_append": false, 00:11:35.111 "compare": false, 00:11:35.111 "compare_and_write": false, 00:11:35.111 "abort": true, 00:11:35.111 "seek_hole": false, 00:11:35.111 "seek_data": false, 00:11:35.111 "copy": true, 00:11:35.111 "nvme_iov_md": false 00:11:35.111 }, 00:11:35.111 "memory_domains": [ 00:11:35.111 { 00:11:35.111 "dma_device_id": "system", 00:11:35.111 "dma_device_type": 1 00:11:35.111 }, 00:11:35.111 { 00:11:35.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.111 "dma_device_type": 2 00:11:35.111 } 00:11:35.111 ], 00:11:35.111 "driver_specific": {} 00:11:35.111 }' 00:11:35.111 13:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:35.111 13:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:35.111 13:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:35.111 13:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:35.111 13:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:35.370 13:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:35.370 13:11:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:35.370 13:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:35.370 13:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:35.370 13:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:35.370 13:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:35.370 13:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:35.370 13:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:35.370 13:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:35.370 13:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:35.628 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:35.628 "name": "BaseBdev2", 00:11:35.628 "aliases": [ 00:11:35.628 "91a133c6-3f5b-44dc-901a-8bdaf24ea869" 00:11:35.628 ], 00:11:35.628 "product_name": "Malloc disk", 00:11:35.628 "block_size": 512, 00:11:35.628 "num_blocks": 65536, 00:11:35.628 "uuid": "91a133c6-3f5b-44dc-901a-8bdaf24ea869", 00:11:35.628 "assigned_rate_limits": { 00:11:35.628 "rw_ios_per_sec": 0, 00:11:35.628 "rw_mbytes_per_sec": 0, 00:11:35.628 "r_mbytes_per_sec": 0, 00:11:35.628 "w_mbytes_per_sec": 0 00:11:35.628 }, 00:11:35.628 "claimed": true, 00:11:35.628 "claim_type": "exclusive_write", 00:11:35.628 "zoned": false, 00:11:35.628 "supported_io_types": { 00:11:35.628 "read": true, 00:11:35.628 "write": true, 00:11:35.628 "unmap": true, 00:11:35.628 "flush": true, 00:11:35.628 "reset": true, 00:11:35.628 "nvme_admin": false, 00:11:35.628 "nvme_io": false, 00:11:35.628 "nvme_io_md": false, 00:11:35.628 "write_zeroes": true, 
00:11:35.628 "zcopy": true, 00:11:35.628 "get_zone_info": false, 00:11:35.628 "zone_management": false, 00:11:35.628 "zone_append": false, 00:11:35.628 "compare": false, 00:11:35.628 "compare_and_write": false, 00:11:35.628 "abort": true, 00:11:35.628 "seek_hole": false, 00:11:35.628 "seek_data": false, 00:11:35.628 "copy": true, 00:11:35.628 "nvme_iov_md": false 00:11:35.628 }, 00:11:35.628 "memory_domains": [ 00:11:35.628 { 00:11:35.628 "dma_device_id": "system", 00:11:35.628 "dma_device_type": 1 00:11:35.628 }, 00:11:35.628 { 00:11:35.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.628 "dma_device_type": 2 00:11:35.628 } 00:11:35.628 ], 00:11:35.628 "driver_specific": {} 00:11:35.628 }' 00:11:35.628 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:35.628 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:35.628 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:35.628 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:35.887 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:35.887 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:35.887 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:35.887 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:35.887 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:35.887 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:35.887 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:35.887 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:35.887 13:11:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:36.146 [2024-07-26 13:11:16.611606] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:36.146 [2024-07-26 13:11:16.611629] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:36.146 [2024-07-26 13:11:16.611668] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.146 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:36.146 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:11:36.146 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:36.146 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:36.146 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:11:36.146 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:11:36.146 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:36.146 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:11:36.146 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:36.146 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:36.146 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:36.146 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:36.146 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:36.146 13:11:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:36.146 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:36.146 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:36.146 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.405 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:36.405 "name": "Existed_Raid", 00:11:36.405 "uuid": "f1b542df-d34e-4c59-ade0-b30b03b33dfd", 00:11:36.405 "strip_size_kb": 64, 00:11:36.405 "state": "offline", 00:11:36.405 "raid_level": "raid0", 00:11:36.405 "superblock": false, 00:11:36.405 "num_base_bdevs": 2, 00:11:36.405 "num_base_bdevs_discovered": 1, 00:11:36.405 "num_base_bdevs_operational": 1, 00:11:36.405 "base_bdevs_list": [ 00:11:36.405 { 00:11:36.405 "name": null, 00:11:36.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.405 "is_configured": false, 00:11:36.405 "data_offset": 0, 00:11:36.405 "data_size": 65536 00:11:36.405 }, 00:11:36.405 { 00:11:36.405 "name": "BaseBdev2", 00:11:36.405 "uuid": "91a133c6-3f5b-44dc-901a-8bdaf24ea869", 00:11:36.405 "is_configured": true, 00:11:36.405 "data_offset": 0, 00:11:36.405 "data_size": 65536 00:11:36.405 } 00:11:36.405 ] 00:11:36.405 }' 00:11:36.405 13:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:36.405 13:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.974 13:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:36.974 13:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:36.974 13:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:36.974 13:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:37.268 13:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:37.268 13:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:37.268 13:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:37.527 [2024-07-26 13:11:17.855905] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:37.527 [2024-07-26 13:11:17.855950] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa1b610 name Existed_Raid, state offline 00:11:37.527 13:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:37.527 13:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:37.527 13:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:37.527 13:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:37.786 13:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:37.786 13:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:37.787 13:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:11:37.787 13:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 653442 00:11:37.787 13:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 653442 ']' 00:11:37.787 13:11:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 653442 00:11:37.787 13:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:37.787 13:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:37.787 13:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 653442 00:11:37.787 13:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:37.787 13:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:37.787 13:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 653442' 00:11:37.787 killing process with pid 653442 00:11:37.787 13:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 653442 00:11:37.787 [2024-07-26 13:11:18.170115] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:37.787 13:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 653442 00:11:37.787 [2024-07-26 13:11:18.170960] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:38.046 13:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:11:38.046 00:11:38.046 real 0m10.137s 00:11:38.046 user 0m18.007s 00:11:38.046 sys 0m1.904s 00:11:38.046 13:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:38.046 13:11:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.046 ************************************ 00:11:38.046 END TEST raid_state_function_test 00:11:38.046 ************************************ 00:11:38.046 13:11:18 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:11:38.046 13:11:18 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:38.046 13:11:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:38.046 13:11:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:38.046 ************************************ 00:11:38.046 START TEST raid_state_function_test_sb 00:11:38.046 ************************************ 00:11:38.046 13:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:11:38.046 13:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:11:38.046 13:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:11:38.046 13:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:11:38.046 13:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:38.046 13:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:38.047 13:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:38.047 13:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:38.047 13:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:38.047 13:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:38.047 13:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:38.047 13:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:38.047 13:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:38.047 13:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:38.047 13:11:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:38.047 13:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:38.047 13:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:38.047 13:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:38.047 13:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:38.047 13:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:11:38.047 13:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:11:38.047 13:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:11:38.047 13:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:11:38.047 13:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:11:38.047 13:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=655445 00:11:38.047 13:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 655445' 00:11:38.047 Process raid pid: 655445 00:11:38.047 13:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:38.047 13:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 655445 /var/tmp/spdk-raid.sock 00:11:38.047 13:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 655445 ']' 00:11:38.047 13:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:38.047 13:11:18 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:11:38.047 13:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:38.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:38.047 13:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:38.047 13:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.047 [2024-07-26 13:11:18.509604] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:11:38.047 [2024-07-26 13:11:18.509662] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.307 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:38.307 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:38.307 [2024-07-26 13:11:18.643209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.307 [2024-07-26 13:11:18.729368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.307 [2024-07-26 13:11:18.788295] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:38.307 [2024-07-26 13:11:18.788329] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.245 13:11:19
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:39.245 13:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:39.245 13:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:39.245 [2024-07-26 13:11:19.620213] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:39.245 [2024-07-26 13:11:19.620250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:39.245 [2024-07-26 13:11:19.620260] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:39.245 [2024-07-26 13:11:19.620271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:39.245 13:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:39.245 13:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:39.245 13:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:39.245 13:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:39.245 13:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:39.245 13:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:39.245 13:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:39.245 13:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:39.245 13:11:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:39.245 13:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:39.245 13:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:39.246 13:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.505 13:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:39.505 "name": "Existed_Raid", 00:11:39.505 "uuid": "b4c729d6-344b-4c7c-9040-11331822c19b", 00:11:39.505 "strip_size_kb": 64, 00:11:39.505 "state": "configuring", 00:11:39.505 "raid_level": "raid0", 00:11:39.505 "superblock": true, 00:11:39.505 "num_base_bdevs": 2, 00:11:39.505 "num_base_bdevs_discovered": 0, 00:11:39.505 "num_base_bdevs_operational": 2, 00:11:39.505 "base_bdevs_list": [ 00:11:39.505 { 00:11:39.505 "name": "BaseBdev1", 00:11:39.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.505 "is_configured": false, 00:11:39.505 "data_offset": 0, 00:11:39.505 "data_size": 0 00:11:39.505 }, 00:11:39.505 { 00:11:39.505 "name": "BaseBdev2", 00:11:39.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.505 "is_configured": false, 00:11:39.505 "data_offset": 0, 00:11:39.505 "data_size": 0 00:11:39.505 } 00:11:39.505 ] 00:11:39.505 }' 00:11:39.505 13:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:39.505 13:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.071 13:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:40.071 [2024-07-26 13:11:20.582640] 
bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:40.071 [2024-07-26 13:11:20.582674] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f03f20 name Existed_Raid, state configuring 00:11:40.329 13:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:40.329 [2024-07-26 13:11:20.755115] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:40.329 [2024-07-26 13:11:20.755145] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:40.329 [2024-07-26 13:11:20.755154] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:40.329 [2024-07-26 13:11:20.755165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:40.329 13:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:40.587 [2024-07-26 13:11:20.933314] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:40.587 BaseBdev1 00:11:40.587 13:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:40.587 13:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:40.587 13:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:40.587 13:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:40.587 13:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:40.587 13:11:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:40.587 13:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:40.587 13:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:40.846 [ 00:11:40.846 { 00:11:40.846 "name": "BaseBdev1", 00:11:40.846 "aliases": [ 00:11:40.846 "d4eccade-7809-49d2-96a8-5659e2d91ef2" 00:11:40.846 ], 00:11:40.846 "product_name": "Malloc disk", 00:11:40.846 "block_size": 512, 00:11:40.846 "num_blocks": 65536, 00:11:40.846 "uuid": "d4eccade-7809-49d2-96a8-5659e2d91ef2", 00:11:40.846 "assigned_rate_limits": { 00:11:40.846 "rw_ios_per_sec": 0, 00:11:40.846 "rw_mbytes_per_sec": 0, 00:11:40.846 "r_mbytes_per_sec": 0, 00:11:40.846 "w_mbytes_per_sec": 0 00:11:40.846 }, 00:11:40.846 "claimed": true, 00:11:40.846 "claim_type": "exclusive_write", 00:11:40.846 "zoned": false, 00:11:40.846 "supported_io_types": { 00:11:40.846 "read": true, 00:11:40.846 "write": true, 00:11:40.846 "unmap": true, 00:11:40.846 "flush": true, 00:11:40.846 "reset": true, 00:11:40.846 "nvme_admin": false, 00:11:40.846 "nvme_io": false, 00:11:40.846 "nvme_io_md": false, 00:11:40.846 "write_zeroes": true, 00:11:40.846 "zcopy": true, 00:11:40.846 "get_zone_info": false, 00:11:40.846 "zone_management": false, 00:11:40.846 "zone_append": false, 00:11:40.846 "compare": false, 00:11:40.846 "compare_and_write": false, 00:11:40.846 "abort": true, 00:11:40.846 "seek_hole": false, 00:11:40.846 "seek_data": false, 00:11:40.846 "copy": true, 00:11:40.846 "nvme_iov_md": false 00:11:40.846 }, 00:11:40.846 "memory_domains": [ 00:11:40.846 { 00:11:40.846 "dma_device_id": "system", 00:11:40.846 "dma_device_type": 1 00:11:40.846 }, 
00:11:40.846 { 00:11:40.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.846 "dma_device_type": 2 00:11:40.846 } 00:11:40.846 ], 00:11:40.846 "driver_specific": {} 00:11:40.846 } 00:11:40.846 ] 00:11:40.846 13:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:40.846 13:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:40.846 13:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:40.846 13:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:40.846 13:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:40.846 13:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:40.846 13:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:40.846 13:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:40.846 13:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:40.846 13:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:40.846 13:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:40.846 13:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:40.846 13:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.104 13:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:41.104 "name": "Existed_Raid", 00:11:41.104 
"uuid": "32a2d3e1-a769-4e04-aef4-abf0d4f668b7", 00:11:41.104 "strip_size_kb": 64, 00:11:41.104 "state": "configuring", 00:11:41.104 "raid_level": "raid0", 00:11:41.104 "superblock": true, 00:11:41.104 "num_base_bdevs": 2, 00:11:41.104 "num_base_bdevs_discovered": 1, 00:11:41.104 "num_base_bdevs_operational": 2, 00:11:41.104 "base_bdevs_list": [ 00:11:41.104 { 00:11:41.104 "name": "BaseBdev1", 00:11:41.104 "uuid": "d4eccade-7809-49d2-96a8-5659e2d91ef2", 00:11:41.104 "is_configured": true, 00:11:41.104 "data_offset": 2048, 00:11:41.104 "data_size": 63488 00:11:41.104 }, 00:11:41.104 { 00:11:41.104 "name": "BaseBdev2", 00:11:41.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.104 "is_configured": false, 00:11:41.104 "data_offset": 0, 00:11:41.104 "data_size": 0 00:11:41.104 } 00:11:41.104 ] 00:11:41.104 }' 00:11:41.104 13:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:41.104 13:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.671 13:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:41.929 [2024-07-26 13:11:22.224704] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:41.929 [2024-07-26 13:11:22.224737] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f03810 name Existed_Raid, state configuring 00:11:41.929 13:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:11:41.929 [2024-07-26 13:11:22.393191] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.929 [2024-07-26 13:11:22.394536] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:11:41.929 [2024-07-26 13:11:22.394567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:41.929 13:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:41.929 13:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:41.929 13:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:11:41.929 13:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:41.929 13:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:41.929 13:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:41.929 13:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:41.929 13:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:41.929 13:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:41.929 13:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:41.929 13:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:41.929 13:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:41.929 13:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:41.929 13:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.187 13:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:11:42.187 "name": "Existed_Raid", 00:11:42.187 "uuid": "474ac03a-10ef-4020-96ec-2b9417933c7b", 00:11:42.188 "strip_size_kb": 64, 00:11:42.188 "state": "configuring", 00:11:42.188 "raid_level": "raid0", 00:11:42.188 "superblock": true, 00:11:42.188 "num_base_bdevs": 2, 00:11:42.188 "num_base_bdevs_discovered": 1, 00:11:42.188 "num_base_bdevs_operational": 2, 00:11:42.188 "base_bdevs_list": [ 00:11:42.188 { 00:11:42.188 "name": "BaseBdev1", 00:11:42.188 "uuid": "d4eccade-7809-49d2-96a8-5659e2d91ef2", 00:11:42.188 "is_configured": true, 00:11:42.188 "data_offset": 2048, 00:11:42.188 "data_size": 63488 00:11:42.188 }, 00:11:42.188 { 00:11:42.188 "name": "BaseBdev2", 00:11:42.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.188 "is_configured": false, 00:11:42.188 "data_offset": 0, 00:11:42.188 "data_size": 0 00:11:42.188 } 00:11:42.188 ] 00:11:42.188 }' 00:11:42.188 13:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:42.188 13:11:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.756 13:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:43.014 [2024-07-26 13:11:23.354792] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:43.014 [2024-07-26 13:11:23.354921] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f04610 00:11:43.014 [2024-07-26 13:11:23.354933] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:43.014 [2024-07-26 13:11:23.355090] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ef0690 00:11:43.014 [2024-07-26 13:11:23.355206] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f04610 00:11:43.014 [2024-07-26 13:11:23.355216] 
bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1f04610 00:11:43.014 [2024-07-26 13:11:23.355300] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.014 BaseBdev2 00:11:43.014 13:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:43.014 13:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:43.014 13:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:43.014 13:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:43.014 13:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:43.014 13:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:43.014 13:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:43.272 13:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:43.531 [ 00:11:43.531 { 00:11:43.531 "name": "BaseBdev2", 00:11:43.531 "aliases": [ 00:11:43.531 "d0888646-e0d7-411b-a9e0-6eec7c316d15" 00:11:43.531 ], 00:11:43.531 "product_name": "Malloc disk", 00:11:43.531 "block_size": 512, 00:11:43.531 "num_blocks": 65536, 00:11:43.531 "uuid": "d0888646-e0d7-411b-a9e0-6eec7c316d15", 00:11:43.531 "assigned_rate_limits": { 00:11:43.531 "rw_ios_per_sec": 0, 00:11:43.531 "rw_mbytes_per_sec": 0, 00:11:43.531 "r_mbytes_per_sec": 0, 00:11:43.531 "w_mbytes_per_sec": 0 00:11:43.531 }, 00:11:43.531 "claimed": true, 00:11:43.531 "claim_type": "exclusive_write", 00:11:43.531 "zoned": 
false, 00:11:43.531 "supported_io_types": { 00:11:43.531 "read": true, 00:11:43.531 "write": true, 00:11:43.531 "unmap": true, 00:11:43.531 "flush": true, 00:11:43.531 "reset": true, 00:11:43.531 "nvme_admin": false, 00:11:43.531 "nvme_io": false, 00:11:43.531 "nvme_io_md": false, 00:11:43.531 "write_zeroes": true, 00:11:43.531 "zcopy": true, 00:11:43.531 "get_zone_info": false, 00:11:43.531 "zone_management": false, 00:11:43.531 "zone_append": false, 00:11:43.531 "compare": false, 00:11:43.531 "compare_and_write": false, 00:11:43.531 "abort": true, 00:11:43.531 "seek_hole": false, 00:11:43.531 "seek_data": false, 00:11:43.531 "copy": true, 00:11:43.531 "nvme_iov_md": false 00:11:43.531 }, 00:11:43.531 "memory_domains": [ 00:11:43.531 { 00:11:43.531 "dma_device_id": "system", 00:11:43.531 "dma_device_type": 1 00:11:43.531 }, 00:11:43.531 { 00:11:43.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.531 "dma_device_type": 2 00:11:43.531 } 00:11:43.531 ], 00:11:43.531 "driver_specific": {} 00:11:43.531 } 00:11:43.531 ] 00:11:43.531 13:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:43.531 13:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:43.531 13:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:43.531 13:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:11:43.531 13:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:43.531 13:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:43.531 13:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:43.531 13:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:43.531 13:11:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:43.531 13:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:43.531 13:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:43.531 13:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:43.531 13:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:43.531 13:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:43.531 13:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.531 13:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:43.531 "name": "Existed_Raid", 00:11:43.531 "uuid": "474ac03a-10ef-4020-96ec-2b9417933c7b", 00:11:43.531 "strip_size_kb": 64, 00:11:43.531 "state": "online", 00:11:43.531 "raid_level": "raid0", 00:11:43.531 "superblock": true, 00:11:43.531 "num_base_bdevs": 2, 00:11:43.531 "num_base_bdevs_discovered": 2, 00:11:43.531 "num_base_bdevs_operational": 2, 00:11:43.531 "base_bdevs_list": [ 00:11:43.532 { 00:11:43.532 "name": "BaseBdev1", 00:11:43.532 "uuid": "d4eccade-7809-49d2-96a8-5659e2d91ef2", 00:11:43.532 "is_configured": true, 00:11:43.532 "data_offset": 2048, 00:11:43.532 "data_size": 63488 00:11:43.532 }, 00:11:43.532 { 00:11:43.532 "name": "BaseBdev2", 00:11:43.532 "uuid": "d0888646-e0d7-411b-a9e0-6eec7c316d15", 00:11:43.532 "is_configured": true, 00:11:43.532 "data_offset": 2048, 00:11:43.532 "data_size": 63488 00:11:43.532 } 00:11:43.532 ] 00:11:43.532 }' 00:11:43.532 13:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:43.532 13:11:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.100 13:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:44.100 13:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:44.100 13:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:44.100 13:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:44.100 13:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:44.100 13:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:44.100 13:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:44.100 13:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:44.360 [2024-07-26 13:11:24.814876] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:44.360 13:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:44.360 "name": "Existed_Raid", 00:11:44.360 "aliases": [ 00:11:44.360 "474ac03a-10ef-4020-96ec-2b9417933c7b" 00:11:44.360 ], 00:11:44.360 "product_name": "Raid Volume", 00:11:44.360 "block_size": 512, 00:11:44.360 "num_blocks": 126976, 00:11:44.360 "uuid": "474ac03a-10ef-4020-96ec-2b9417933c7b", 00:11:44.360 "assigned_rate_limits": { 00:11:44.360 "rw_ios_per_sec": 0, 00:11:44.360 "rw_mbytes_per_sec": 0, 00:11:44.360 "r_mbytes_per_sec": 0, 00:11:44.360 "w_mbytes_per_sec": 0 00:11:44.360 }, 00:11:44.360 "claimed": false, 00:11:44.360 "zoned": false, 00:11:44.360 "supported_io_types": { 00:11:44.360 "read": true, 00:11:44.360 "write": true, 00:11:44.360 "unmap": true, 
00:11:44.360 "flush": true, 00:11:44.360 "reset": true, 00:11:44.360 "nvme_admin": false, 00:11:44.360 "nvme_io": false, 00:11:44.360 "nvme_io_md": false, 00:11:44.360 "write_zeroes": true, 00:11:44.360 "zcopy": false, 00:11:44.360 "get_zone_info": false, 00:11:44.360 "zone_management": false, 00:11:44.360 "zone_append": false, 00:11:44.360 "compare": false, 00:11:44.360 "compare_and_write": false, 00:11:44.360 "abort": false, 00:11:44.360 "seek_hole": false, 00:11:44.360 "seek_data": false, 00:11:44.360 "copy": false, 00:11:44.360 "nvme_iov_md": false 00:11:44.360 }, 00:11:44.360 "memory_domains": [ 00:11:44.360 { 00:11:44.360 "dma_device_id": "system", 00:11:44.360 "dma_device_type": 1 00:11:44.360 }, 00:11:44.360 { 00:11:44.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.360 "dma_device_type": 2 00:11:44.360 }, 00:11:44.360 { 00:11:44.360 "dma_device_id": "system", 00:11:44.360 "dma_device_type": 1 00:11:44.360 }, 00:11:44.360 { 00:11:44.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.360 "dma_device_type": 2 00:11:44.360 } 00:11:44.360 ], 00:11:44.360 "driver_specific": { 00:11:44.360 "raid": { 00:11:44.360 "uuid": "474ac03a-10ef-4020-96ec-2b9417933c7b", 00:11:44.360 "strip_size_kb": 64, 00:11:44.360 "state": "online", 00:11:44.360 "raid_level": "raid0", 00:11:44.360 "superblock": true, 00:11:44.360 "num_base_bdevs": 2, 00:11:44.360 "num_base_bdevs_discovered": 2, 00:11:44.360 "num_base_bdevs_operational": 2, 00:11:44.360 "base_bdevs_list": [ 00:11:44.360 { 00:11:44.360 "name": "BaseBdev1", 00:11:44.360 "uuid": "d4eccade-7809-49d2-96a8-5659e2d91ef2", 00:11:44.360 "is_configured": true, 00:11:44.360 "data_offset": 2048, 00:11:44.360 "data_size": 63488 00:11:44.360 }, 00:11:44.360 { 00:11:44.360 "name": "BaseBdev2", 00:11:44.360 "uuid": "d0888646-e0d7-411b-a9e0-6eec7c316d15", 00:11:44.360 "is_configured": true, 00:11:44.360 "data_offset": 2048, 00:11:44.360 "data_size": 63488 00:11:44.360 } 00:11:44.360 ] 00:11:44.360 } 00:11:44.360 } 00:11:44.360 
}' 00:11:44.360 13:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:44.360 13:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:44.360 BaseBdev2' 00:11:44.360 13:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:44.360 13:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:44.360 13:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:44.620 13:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:44.620 "name": "BaseBdev1", 00:11:44.620 "aliases": [ 00:11:44.620 "d4eccade-7809-49d2-96a8-5659e2d91ef2" 00:11:44.620 ], 00:11:44.620 "product_name": "Malloc disk", 00:11:44.620 "block_size": 512, 00:11:44.620 "num_blocks": 65536, 00:11:44.620 "uuid": "d4eccade-7809-49d2-96a8-5659e2d91ef2", 00:11:44.620 "assigned_rate_limits": { 00:11:44.620 "rw_ios_per_sec": 0, 00:11:44.620 "rw_mbytes_per_sec": 0, 00:11:44.620 "r_mbytes_per_sec": 0, 00:11:44.620 "w_mbytes_per_sec": 0 00:11:44.620 }, 00:11:44.620 "claimed": true, 00:11:44.620 "claim_type": "exclusive_write", 00:11:44.620 "zoned": false, 00:11:44.620 "supported_io_types": { 00:11:44.620 "read": true, 00:11:44.620 "write": true, 00:11:44.620 "unmap": true, 00:11:44.620 "flush": true, 00:11:44.620 "reset": true, 00:11:44.620 "nvme_admin": false, 00:11:44.620 "nvme_io": false, 00:11:44.620 "nvme_io_md": false, 00:11:44.620 "write_zeroes": true, 00:11:44.620 "zcopy": true, 00:11:44.620 "get_zone_info": false, 00:11:44.620 "zone_management": false, 00:11:44.620 "zone_append": false, 00:11:44.620 "compare": false, 00:11:44.620 "compare_and_write": false, 00:11:44.620 "abort": 
true, 00:11:44.620 "seek_hole": false, 00:11:44.620 "seek_data": false, 00:11:44.620 "copy": true, 00:11:44.620 "nvme_iov_md": false 00:11:44.620 }, 00:11:44.620 "memory_domains": [ 00:11:44.620 { 00:11:44.620 "dma_device_id": "system", 00:11:44.620 "dma_device_type": 1 00:11:44.620 }, 00:11:44.620 { 00:11:44.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.620 "dma_device_type": 2 00:11:44.620 } 00:11:44.620 ], 00:11:44.620 "driver_specific": {} 00:11:44.620 }' 00:11:44.620 13:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:44.879 13:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:44.879 13:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:44.879 13:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:44.879 13:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:44.879 13:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:44.879 13:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:44.879 13:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:44.879 13:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:44.879 13:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:45.138 13:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:45.138 13:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:45.138 13:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:45.138 13:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:45.138 13:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:45.398 13:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:45.398 "name": "BaseBdev2", 00:11:45.398 "aliases": [ 00:11:45.398 "d0888646-e0d7-411b-a9e0-6eec7c316d15" 00:11:45.398 ], 00:11:45.398 "product_name": "Malloc disk", 00:11:45.398 "block_size": 512, 00:11:45.398 "num_blocks": 65536, 00:11:45.398 "uuid": "d0888646-e0d7-411b-a9e0-6eec7c316d15", 00:11:45.398 "assigned_rate_limits": { 00:11:45.398 "rw_ios_per_sec": 0, 00:11:45.398 "rw_mbytes_per_sec": 0, 00:11:45.398 "r_mbytes_per_sec": 0, 00:11:45.398 "w_mbytes_per_sec": 0 00:11:45.398 }, 00:11:45.398 "claimed": true, 00:11:45.398 "claim_type": "exclusive_write", 00:11:45.398 "zoned": false, 00:11:45.398 "supported_io_types": { 00:11:45.398 "read": true, 00:11:45.398 "write": true, 00:11:45.398 "unmap": true, 00:11:45.398 "flush": true, 00:11:45.398 "reset": true, 00:11:45.398 "nvme_admin": false, 00:11:45.398 "nvme_io": false, 00:11:45.398 "nvme_io_md": false, 00:11:45.398 "write_zeroes": true, 00:11:45.398 "zcopy": true, 00:11:45.398 "get_zone_info": false, 00:11:45.398 "zone_management": false, 00:11:45.398 "zone_append": false, 00:11:45.398 "compare": false, 00:11:45.398 "compare_and_write": false, 00:11:45.398 "abort": true, 00:11:45.398 "seek_hole": false, 00:11:45.398 "seek_data": false, 00:11:45.398 "copy": true, 00:11:45.398 "nvme_iov_md": false 00:11:45.398 }, 00:11:45.398 "memory_domains": [ 00:11:45.398 { 00:11:45.398 "dma_device_id": "system", 00:11:45.398 "dma_device_type": 1 00:11:45.398 }, 00:11:45.398 { 00:11:45.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.398 "dma_device_type": 2 00:11:45.398 } 00:11:45.398 ], 00:11:45.398 "driver_specific": {} 00:11:45.398 }' 00:11:45.398 13:11:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:45.398 13:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:45.398 13:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:45.398 13:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:45.398 13:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:45.398 13:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:45.398 13:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:45.398 13:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:45.657 13:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:45.657 13:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:45.657 13:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:45.657 13:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:45.657 13:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:45.916 [2024-07-26 13:11:26.226400] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:45.916 [2024-07-26 13:11:26.226425] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:45.916 [2024-07-26 13:11:26.226463] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:45.916 13:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:45.916 13:11:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:11:45.916 13:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:45.916 13:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:11:45.916 13:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:11:45.916 13:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:11:45.916 13:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:45.916 13:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:11:45.916 13:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:45.916 13:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:45.916 13:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:11:45.916 13:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:45.916 13:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:45.916 13:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:45.916 13:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:45.916 13:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:45.916 13:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.175 13:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:11:46.175 "name": "Existed_Raid", 00:11:46.175 "uuid": "474ac03a-10ef-4020-96ec-2b9417933c7b", 00:11:46.175 "strip_size_kb": 64, 00:11:46.175 "state": "offline", 00:11:46.175 "raid_level": "raid0", 00:11:46.175 "superblock": true, 00:11:46.175 "num_base_bdevs": 2, 00:11:46.175 "num_base_bdevs_discovered": 1, 00:11:46.175 "num_base_bdevs_operational": 1, 00:11:46.175 "base_bdevs_list": [ 00:11:46.175 { 00:11:46.175 "name": null, 00:11:46.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.175 "is_configured": false, 00:11:46.175 "data_offset": 2048, 00:11:46.175 "data_size": 63488 00:11:46.175 }, 00:11:46.175 { 00:11:46.175 "name": "BaseBdev2", 00:11:46.175 "uuid": "d0888646-e0d7-411b-a9e0-6eec7c316d15", 00:11:46.175 "is_configured": true, 00:11:46.175 "data_offset": 2048, 00:11:46.175 "data_size": 63488 00:11:46.175 } 00:11:46.175 ] 00:11:46.175 }' 00:11:46.175 13:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:46.175 13:11:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.743 13:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:46.743 13:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:46.743 13:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:46.743 13:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:47.001 13:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:47.001 13:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:47.001 13:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:47.001 [2024-07-26 13:11:27.494725] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:47.001 [2024-07-26 13:11:27.494770] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f04610 name Existed_Raid, state offline 00:11:47.001 13:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:47.001 13:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:47.001 13:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:47.001 13:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:47.259 13:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:47.259 13:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:47.259 13:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:11:47.259 13:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 655445 00:11:47.259 13:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 655445 ']' 00:11:47.259 13:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 655445 00:11:47.260 13:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:47.260 13:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:47.260 13:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 655445 00:11:47.518 13:11:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:47.518 13:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:47.518 13:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 655445' 00:11:47.518 killing process with pid 655445 00:11:47.518 13:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 655445 00:11:47.518 [2024-07-26 13:11:27.801915] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:47.518 13:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 655445 00:11:47.519 [2024-07-26 13:11:27.802779] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:47.519 13:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:11:47.519 00:11:47.519 real 0m9.548s 00:11:47.519 user 0m16.935s 00:11:47.519 sys 0m1.805s 00:11:47.519 13:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:47.519 13:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.519 ************************************ 00:11:47.519 END TEST raid_state_function_test_sb 00:11:47.519 ************************************ 00:11:47.519 13:11:28 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:11:47.519 13:11:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:47.519 13:11:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:47.519 13:11:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:47.778 ************************************ 00:11:47.778 START TEST raid_superblock_test 00:11:47.778 ************************************ 00:11:47.778 13:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 
00:11:47.778 13:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:11:47.778 13:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:11:47.778 13:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:11:47.778 13:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:11:47.778 13:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:11:47.778 13:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:11:47.778 13:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:11:47.778 13:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:11:47.778 13:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:11:47.778 13:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:11:47.778 13:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:11:47.778 13:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:11:47.778 13:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:11:47.778 13:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:11:47.778 13:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:11:47.778 13:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:11:47.778 13:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=657349 00:11:47.778 13:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 657349 /var/tmp/spdk-raid.sock 00:11:47.778 13:11:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:11:47.778 13:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 657349 ']' 00:11:47.778 13:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:47.778 13:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:47.778 13:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:47.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:47.778 13:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:47.778 13:11:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.778 [2024-07-26 13:11:28.135755] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:11:47.778 [2024-07-26 13:11:28.135810] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid657349 ] 00:11:47.778 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:47.778 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:47.779 [2024-07-26 13:11:28.265751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.037 [2024-07-26 13:11:28.352090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.037 [2024-07-26 13:11:28.411264] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.038 [2024-07-26 13:11:28.411322] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.604 13:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:48.604 13:11:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:48.604 13:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:11:48.604 13:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:11:48.604 13:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:11:48.605 13:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:11:48.605 13:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:48.605 13:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:48.605 13:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:11:48.605 13:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:48.605 13:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:11:48.863 malloc1 00:11:48.863 13:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:49.123 [2024-07-26 13:11:29.480203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:49.123 [2024-07-26 13:11:29.480246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.123 [2024-07-26 13:11:29.480264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd932f0 00:11:49.123 [2024-07-26 13:11:29.480275] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.123 [2024-07-26 13:11:29.481779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.123 [2024-07-26 13:11:29.481806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:49.123 pt1 00:11:49.123 13:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:11:49.123 13:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:11:49.123 13:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:11:49.123 13:11:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:11:49.123 13:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:49.123 13:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:49.123 13:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:11:49.123 13:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:49.123 13:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:11:49.382 malloc2 00:11:49.382 13:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:49.641 [2024-07-26 13:11:29.941797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:49.641 [2024-07-26 13:11:29.941836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.641 [2024-07-26 13:11:29.941851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd946d0 00:11:49.641 [2024-07-26 13:11:29.941863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.641 [2024-07-26 13:11:29.943295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.641 [2024-07-26 13:11:29.943323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:49.641 pt2 00:11:49.641 13:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:11:49.641 13:11:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:11:49.641 13:11:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:11:49.641 [2024-07-26 13:11:30.166422] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:49.641 [2024-07-26 13:11:30.167672] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:49.900 [2024-07-26 13:11:30.167797] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xf2d310 00:11:49.900 [2024-07-26 13:11:30.167809] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:49.900 [2024-07-26 13:11:30.168003] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd8c3d0 00:11:49.900 [2024-07-26 13:11:30.168129] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf2d310 00:11:49.900 [2024-07-26 13:11:30.168149] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xf2d310 00:11:49.900 [2024-07-26 13:11:30.168256] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.900 13:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:49.900 13:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:49.900 13:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:49.900 13:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:49.900 13:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:49.900 13:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:49.900 13:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:11:49.900 13:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:49.900 13:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:49.900 13:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:49.900 13:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:49.900 13:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.900 13:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:49.900 "name": "raid_bdev1", 00:11:49.900 "uuid": "adf8e16b-a53b-4042-a5e2-e008a802684b", 00:11:49.900 "strip_size_kb": 64, 00:11:49.900 "state": "online", 00:11:49.900 "raid_level": "raid0", 00:11:49.900 "superblock": true, 00:11:49.900 "num_base_bdevs": 2, 00:11:49.900 "num_base_bdevs_discovered": 2, 00:11:49.900 "num_base_bdevs_operational": 2, 00:11:49.900 "base_bdevs_list": [ 00:11:49.900 { 00:11:49.900 "name": "pt1", 00:11:49.900 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:49.900 "is_configured": true, 00:11:49.900 "data_offset": 2048, 00:11:49.900 "data_size": 63488 00:11:49.900 }, 00:11:49.900 { 00:11:49.900 "name": "pt2", 00:11:49.900 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:49.900 "is_configured": true, 00:11:49.900 "data_offset": 2048, 00:11:49.900 "data_size": 63488 00:11:49.900 } 00:11:49.900 ] 00:11:49.900 }' 00:11:49.900 13:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:49.900 13:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.531 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:11:50.531 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- 
# local raid_bdev_name=raid_bdev1 00:11:50.531 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:50.531 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:50.531 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:50.531 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:50.531 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:50.531 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:50.790 [2024-07-26 13:11:31.157209] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:50.790 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:50.790 "name": "raid_bdev1", 00:11:50.790 "aliases": [ 00:11:50.790 "adf8e16b-a53b-4042-a5e2-e008a802684b" 00:11:50.790 ], 00:11:50.790 "product_name": "Raid Volume", 00:11:50.790 "block_size": 512, 00:11:50.790 "num_blocks": 126976, 00:11:50.790 "uuid": "adf8e16b-a53b-4042-a5e2-e008a802684b", 00:11:50.790 "assigned_rate_limits": { 00:11:50.790 "rw_ios_per_sec": 0, 00:11:50.790 "rw_mbytes_per_sec": 0, 00:11:50.790 "r_mbytes_per_sec": 0, 00:11:50.790 "w_mbytes_per_sec": 0 00:11:50.790 }, 00:11:50.790 "claimed": false, 00:11:50.790 "zoned": false, 00:11:50.790 "supported_io_types": { 00:11:50.790 "read": true, 00:11:50.790 "write": true, 00:11:50.790 "unmap": true, 00:11:50.790 "flush": true, 00:11:50.790 "reset": true, 00:11:50.790 "nvme_admin": false, 00:11:50.790 "nvme_io": false, 00:11:50.790 "nvme_io_md": false, 00:11:50.790 "write_zeroes": true, 00:11:50.790 "zcopy": false, 00:11:50.790 "get_zone_info": false, 00:11:50.790 "zone_management": false, 00:11:50.790 "zone_append": false, 00:11:50.790 "compare": false, 
00:11:50.790 "compare_and_write": false, 00:11:50.790 "abort": false, 00:11:50.790 "seek_hole": false, 00:11:50.790 "seek_data": false, 00:11:50.790 "copy": false, 00:11:50.790 "nvme_iov_md": false 00:11:50.790 }, 00:11:50.790 "memory_domains": [ 00:11:50.790 { 00:11:50.790 "dma_device_id": "system", 00:11:50.790 "dma_device_type": 1 00:11:50.790 }, 00:11:50.790 { 00:11:50.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.790 "dma_device_type": 2 00:11:50.790 }, 00:11:50.790 { 00:11:50.790 "dma_device_id": "system", 00:11:50.790 "dma_device_type": 1 00:11:50.790 }, 00:11:50.790 { 00:11:50.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.790 "dma_device_type": 2 00:11:50.790 } 00:11:50.790 ], 00:11:50.790 "driver_specific": { 00:11:50.790 "raid": { 00:11:50.790 "uuid": "adf8e16b-a53b-4042-a5e2-e008a802684b", 00:11:50.790 "strip_size_kb": 64, 00:11:50.790 "state": "online", 00:11:50.790 "raid_level": "raid0", 00:11:50.790 "superblock": true, 00:11:50.790 "num_base_bdevs": 2, 00:11:50.790 "num_base_bdevs_discovered": 2, 00:11:50.790 "num_base_bdevs_operational": 2, 00:11:50.790 "base_bdevs_list": [ 00:11:50.790 { 00:11:50.790 "name": "pt1", 00:11:50.790 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:50.790 "is_configured": true, 00:11:50.790 "data_offset": 2048, 00:11:50.790 "data_size": 63488 00:11:50.790 }, 00:11:50.790 { 00:11:50.790 "name": "pt2", 00:11:50.790 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:50.790 "is_configured": true, 00:11:50.790 "data_offset": 2048, 00:11:50.790 "data_size": 63488 00:11:50.790 } 00:11:50.790 ] 00:11:50.790 } 00:11:50.790 } 00:11:50.790 }' 00:11:50.790 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:50.790 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:50.790 pt2' 00:11:50.790 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- 
# for name in $base_bdev_names 00:11:50.790 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:50.790 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:51.049 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:51.049 "name": "pt1", 00:11:51.049 "aliases": [ 00:11:51.049 "00000000-0000-0000-0000-000000000001" 00:11:51.049 ], 00:11:51.049 "product_name": "passthru", 00:11:51.049 "block_size": 512, 00:11:51.049 "num_blocks": 65536, 00:11:51.049 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:51.049 "assigned_rate_limits": { 00:11:51.049 "rw_ios_per_sec": 0, 00:11:51.049 "rw_mbytes_per_sec": 0, 00:11:51.049 "r_mbytes_per_sec": 0, 00:11:51.049 "w_mbytes_per_sec": 0 00:11:51.049 }, 00:11:51.049 "claimed": true, 00:11:51.049 "claim_type": "exclusive_write", 00:11:51.049 "zoned": false, 00:11:51.049 "supported_io_types": { 00:11:51.049 "read": true, 00:11:51.049 "write": true, 00:11:51.049 "unmap": true, 00:11:51.049 "flush": true, 00:11:51.049 "reset": true, 00:11:51.049 "nvme_admin": false, 00:11:51.049 "nvme_io": false, 00:11:51.049 "nvme_io_md": false, 00:11:51.049 "write_zeroes": true, 00:11:51.049 "zcopy": true, 00:11:51.049 "get_zone_info": false, 00:11:51.049 "zone_management": false, 00:11:51.049 "zone_append": false, 00:11:51.049 "compare": false, 00:11:51.049 "compare_and_write": false, 00:11:51.049 "abort": true, 00:11:51.049 "seek_hole": false, 00:11:51.049 "seek_data": false, 00:11:51.049 "copy": true, 00:11:51.049 "nvme_iov_md": false 00:11:51.049 }, 00:11:51.049 "memory_domains": [ 00:11:51.049 { 00:11:51.049 "dma_device_id": "system", 00:11:51.049 "dma_device_type": 1 00:11:51.049 }, 00:11:51.049 { 00:11:51.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.049 "dma_device_type": 2 00:11:51.049 } 00:11:51.049 ], 00:11:51.049 
"driver_specific": { 00:11:51.049 "passthru": { 00:11:51.049 "name": "pt1", 00:11:51.049 "base_bdev_name": "malloc1" 00:11:51.049 } 00:11:51.049 } 00:11:51.049 }' 00:11:51.049 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:51.049 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:51.049 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:51.049 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:51.049 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:51.308 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:51.308 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:51.308 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:51.308 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:51.308 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:51.308 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:51.308 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:51.308 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:51.308 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:51.308 13:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:51.567 13:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:51.567 "name": "pt2", 00:11:51.567 "aliases": [ 00:11:51.567 "00000000-0000-0000-0000-000000000002" 00:11:51.567 ], 00:11:51.567 "product_name": 
"passthru", 00:11:51.567 "block_size": 512, 00:11:51.567 "num_blocks": 65536, 00:11:51.567 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:51.567 "assigned_rate_limits": { 00:11:51.567 "rw_ios_per_sec": 0, 00:11:51.567 "rw_mbytes_per_sec": 0, 00:11:51.567 "r_mbytes_per_sec": 0, 00:11:51.567 "w_mbytes_per_sec": 0 00:11:51.567 }, 00:11:51.567 "claimed": true, 00:11:51.567 "claim_type": "exclusive_write", 00:11:51.567 "zoned": false, 00:11:51.567 "supported_io_types": { 00:11:51.567 "read": true, 00:11:51.568 "write": true, 00:11:51.568 "unmap": true, 00:11:51.568 "flush": true, 00:11:51.568 "reset": true, 00:11:51.568 "nvme_admin": false, 00:11:51.568 "nvme_io": false, 00:11:51.568 "nvme_io_md": false, 00:11:51.568 "write_zeroes": true, 00:11:51.568 "zcopy": true, 00:11:51.568 "get_zone_info": false, 00:11:51.568 "zone_management": false, 00:11:51.568 "zone_append": false, 00:11:51.568 "compare": false, 00:11:51.568 "compare_and_write": false, 00:11:51.568 "abort": true, 00:11:51.568 "seek_hole": false, 00:11:51.568 "seek_data": false, 00:11:51.568 "copy": true, 00:11:51.568 "nvme_iov_md": false 00:11:51.568 }, 00:11:51.568 "memory_domains": [ 00:11:51.568 { 00:11:51.568 "dma_device_id": "system", 00:11:51.568 "dma_device_type": 1 00:11:51.568 }, 00:11:51.568 { 00:11:51.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.568 "dma_device_type": 2 00:11:51.568 } 00:11:51.568 ], 00:11:51.568 "driver_specific": { 00:11:51.568 "passthru": { 00:11:51.568 "name": "pt2", 00:11:51.568 "base_bdev_name": "malloc2" 00:11:51.568 } 00:11:51.568 } 00:11:51.568 }' 00:11:51.568 13:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:51.568 13:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:51.568 13:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:51.826 13:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:51.826 13:11:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:51.826 13:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:51.826 13:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:51.826 13:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:51.826 13:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:51.826 13:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:51.826 13:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:51.826 13:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:51.826 13:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:51.826 13:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:11:52.085 [2024-07-26 13:11:32.544869] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:52.085 13:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=adf8e16b-a53b-4042-a5e2-e008a802684b 00:11:52.085 13:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z adf8e16b-a53b-4042-a5e2-e008a802684b ']' 00:11:52.085 13:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:52.344 [2024-07-26 13:11:32.773242] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:52.344 [2024-07-26 13:11:32.773263] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:52.344 [2024-07-26 13:11:32.773315] bdev_raid.c: 487:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:11:52.344 [2024-07-26 13:11:32.773357] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:52.344 [2024-07-26 13:11:32.773368] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf2d310 name raid_bdev1, state offline 00:11:52.344 13:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:52.344 13:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:11:52.603 13:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:11:52.603 13:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:11:52.603 13:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:11:52.603 13:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:52.861 13:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:11:52.861 13:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:53.120 13:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:11:53.120 13:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:53.378 13:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:11:53.378 13:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:11:53.378 13:11:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:53.378 13:11:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:11:53.378 13:11:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:53.378 13:11:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:53.378 13:11:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:53.378 13:11:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:53.378 13:11:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:53.378 13:11:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:53.378 13:11:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:53.378 13:11:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:11:53.378 13:11:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:11:53.636 [2024-07-26 13:11:33.916218] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:53.636 [2024-07-26 13:11:33.917467] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:53.636 [2024-07-26 13:11:33.917519] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:53.636 [2024-07-26 13:11:33.917557] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:53.636 [2024-07-26 13:11:33.917574] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:53.636 [2024-07-26 13:11:33.917583] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf363f0 name raid_bdev1, state configuring 00:11:53.636 request: 00:11:53.636 { 00:11:53.636 "name": "raid_bdev1", 00:11:53.636 "raid_level": "raid0", 00:11:53.636 "base_bdevs": [ 00:11:53.636 "malloc1", 00:11:53.636 "malloc2" 00:11:53.636 ], 00:11:53.636 "strip_size_kb": 64, 00:11:53.636 "superblock": false, 00:11:53.636 "method": "bdev_raid_create", 00:11:53.636 "req_id": 1 00:11:53.636 } 00:11:53.636 Got JSON-RPC error response 00:11:53.636 response: 00:11:53.636 { 00:11:53.636 "code": -17, 00:11:53.636 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:53.636 } 00:11:53.636 13:11:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:53.636 13:11:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:53.636 13:11:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:53.636 13:11:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:53.636 13:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:11:53.636 13:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:53.895 13:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:11:53.895 13:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:11:53.895 13:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:53.895 [2024-07-26 13:11:34.373359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:53.895 [2024-07-26 13:11:34.373401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.895 [2024-07-26 13:11:34.373418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf36d70 00:11:53.895 [2024-07-26 13:11:34.373429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.895 [2024-07-26 13:11:34.374899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.895 [2024-07-26 13:11:34.374925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:53.895 [2024-07-26 13:11:34.374991] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:53.895 [2024-07-26 13:11:34.375017] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:53.895 pt1 00:11:53.895 13:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:11:53.895 13:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:53.895 13:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:53.895 13:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid0 00:11:53.895 13:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:53.895 13:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:53.895 13:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:53.895 13:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:53.895 13:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:53.895 13:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:53.895 13:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:53.895 13:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.154 13:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:54.154 "name": "raid_bdev1", 00:11:54.154 "uuid": "adf8e16b-a53b-4042-a5e2-e008a802684b", 00:11:54.154 "strip_size_kb": 64, 00:11:54.154 "state": "configuring", 00:11:54.154 "raid_level": "raid0", 00:11:54.154 "superblock": true, 00:11:54.154 "num_base_bdevs": 2, 00:11:54.154 "num_base_bdevs_discovered": 1, 00:11:54.154 "num_base_bdevs_operational": 2, 00:11:54.154 "base_bdevs_list": [ 00:11:54.154 { 00:11:54.154 "name": "pt1", 00:11:54.154 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:54.154 "is_configured": true, 00:11:54.154 "data_offset": 2048, 00:11:54.154 "data_size": 63488 00:11:54.154 }, 00:11:54.154 { 00:11:54.154 "name": null, 00:11:54.154 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:54.154 "is_configured": false, 00:11:54.154 "data_offset": 2048, 00:11:54.154 "data_size": 63488 00:11:54.154 } 00:11:54.154 ] 00:11:54.154 }' 00:11:54.154 13:11:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:54.154 13:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.722 13:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:11:54.722 13:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:11:54.722 13:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:11:54.722 13:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:54.981 [2024-07-26 13:11:35.347936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:54.981 [2024-07-26 13:11:35.347979] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.981 [2024-07-26 13:11:35.347996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf2dd30 00:11:54.981 [2024-07-26 13:11:35.348007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.981 [2024-07-26 13:11:35.348323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.981 [2024-07-26 13:11:35.348341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:54.981 [2024-07-26 13:11:35.348398] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:54.981 [2024-07-26 13:11:35.348416] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:54.981 [2024-07-26 13:11:35.348506] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xf2bac0 00:11:54.981 [2024-07-26 13:11:35.348516] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:54.981 [2024-07-26 13:11:35.348666] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd8b9b0 00:11:54.981 [2024-07-26 13:11:35.348784] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf2bac0 00:11:54.981 [2024-07-26 13:11:35.348793] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xf2bac0 00:11:54.981 [2024-07-26 13:11:35.348879] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.981 pt2 00:11:54.981 13:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:11:54.981 13:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:11:54.981 13:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:54.981 13:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:54.981 13:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:54.981 13:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:11:54.981 13:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:54.981 13:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:54.981 13:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:54.981 13:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:54.981 13:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:54.981 13:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:54.981 13:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:54.981 13:11:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.240 13:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:55.240 "name": "raid_bdev1", 00:11:55.240 "uuid": "adf8e16b-a53b-4042-a5e2-e008a802684b", 00:11:55.240 "strip_size_kb": 64, 00:11:55.240 "state": "online", 00:11:55.240 "raid_level": "raid0", 00:11:55.240 "superblock": true, 00:11:55.240 "num_base_bdevs": 2, 00:11:55.240 "num_base_bdevs_discovered": 2, 00:11:55.240 "num_base_bdevs_operational": 2, 00:11:55.240 "base_bdevs_list": [ 00:11:55.240 { 00:11:55.240 "name": "pt1", 00:11:55.240 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:55.240 "is_configured": true, 00:11:55.240 "data_offset": 2048, 00:11:55.240 "data_size": 63488 00:11:55.240 }, 00:11:55.240 { 00:11:55.240 "name": "pt2", 00:11:55.240 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:55.240 "is_configured": true, 00:11:55.240 "data_offset": 2048, 00:11:55.240 "data_size": 63488 00:11:55.240 } 00:11:55.240 ] 00:11:55.240 }' 00:11:55.240 13:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:55.240 13:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.808 13:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:11:55.808 13:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:55.808 13:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:55.808 13:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:55.808 13:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:55.808 13:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:55.808 13:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:55.808 13:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:56.067 [2024-07-26 13:11:36.378889] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:56.067 13:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:56.067 "name": "raid_bdev1", 00:11:56.067 "aliases": [ 00:11:56.067 "adf8e16b-a53b-4042-a5e2-e008a802684b" 00:11:56.067 ], 00:11:56.067 "product_name": "Raid Volume", 00:11:56.067 "block_size": 512, 00:11:56.067 "num_blocks": 126976, 00:11:56.067 "uuid": "adf8e16b-a53b-4042-a5e2-e008a802684b", 00:11:56.067 "assigned_rate_limits": { 00:11:56.067 "rw_ios_per_sec": 0, 00:11:56.067 "rw_mbytes_per_sec": 0, 00:11:56.067 "r_mbytes_per_sec": 0, 00:11:56.067 "w_mbytes_per_sec": 0 00:11:56.067 }, 00:11:56.067 "claimed": false, 00:11:56.067 "zoned": false, 00:11:56.067 "supported_io_types": { 00:11:56.067 "read": true, 00:11:56.067 "write": true, 00:11:56.067 "unmap": true, 00:11:56.067 "flush": true, 00:11:56.067 "reset": true, 00:11:56.067 "nvme_admin": false, 00:11:56.067 "nvme_io": false, 00:11:56.067 "nvme_io_md": false, 00:11:56.067 "write_zeroes": true, 00:11:56.067 "zcopy": false, 00:11:56.067 "get_zone_info": false, 00:11:56.067 "zone_management": false, 00:11:56.067 "zone_append": false, 00:11:56.067 "compare": false, 00:11:56.067 "compare_and_write": false, 00:11:56.067 "abort": false, 00:11:56.067 "seek_hole": false, 00:11:56.067 "seek_data": false, 00:11:56.067 "copy": false, 00:11:56.067 "nvme_iov_md": false 00:11:56.067 }, 00:11:56.067 "memory_domains": [ 00:11:56.067 { 00:11:56.067 "dma_device_id": "system", 00:11:56.067 "dma_device_type": 1 00:11:56.067 }, 00:11:56.067 { 00:11:56.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.067 "dma_device_type": 2 00:11:56.067 }, 00:11:56.067 { 00:11:56.067 "dma_device_id": "system", 
00:11:56.067 "dma_device_type": 1 00:11:56.067 }, 00:11:56.067 { 00:11:56.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.067 "dma_device_type": 2 00:11:56.067 } 00:11:56.067 ], 00:11:56.067 "driver_specific": { 00:11:56.067 "raid": { 00:11:56.067 "uuid": "adf8e16b-a53b-4042-a5e2-e008a802684b", 00:11:56.067 "strip_size_kb": 64, 00:11:56.067 "state": "online", 00:11:56.067 "raid_level": "raid0", 00:11:56.067 "superblock": true, 00:11:56.067 "num_base_bdevs": 2, 00:11:56.067 "num_base_bdevs_discovered": 2, 00:11:56.067 "num_base_bdevs_operational": 2, 00:11:56.067 "base_bdevs_list": [ 00:11:56.067 { 00:11:56.067 "name": "pt1", 00:11:56.067 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:56.067 "is_configured": true, 00:11:56.067 "data_offset": 2048, 00:11:56.067 "data_size": 63488 00:11:56.067 }, 00:11:56.067 { 00:11:56.067 "name": "pt2", 00:11:56.067 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:56.067 "is_configured": true, 00:11:56.067 "data_offset": 2048, 00:11:56.067 "data_size": 63488 00:11:56.067 } 00:11:56.067 ] 00:11:56.067 } 00:11:56.067 } 00:11:56.067 }' 00:11:56.067 13:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:56.067 13:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:56.067 pt2' 00:11:56.067 13:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:56.068 13:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:56.068 13:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:56.327 13:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:56.327 "name": "pt1", 00:11:56.327 "aliases": [ 00:11:56.327 "00000000-0000-0000-0000-000000000001" 
00:11:56.327 ], 00:11:56.327 "product_name": "passthru", 00:11:56.327 "block_size": 512, 00:11:56.327 "num_blocks": 65536, 00:11:56.327 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:56.327 "assigned_rate_limits": { 00:11:56.327 "rw_ios_per_sec": 0, 00:11:56.327 "rw_mbytes_per_sec": 0, 00:11:56.327 "r_mbytes_per_sec": 0, 00:11:56.327 "w_mbytes_per_sec": 0 00:11:56.327 }, 00:11:56.327 "claimed": true, 00:11:56.327 "claim_type": "exclusive_write", 00:11:56.327 "zoned": false, 00:11:56.327 "supported_io_types": { 00:11:56.327 "read": true, 00:11:56.327 "write": true, 00:11:56.327 "unmap": true, 00:11:56.327 "flush": true, 00:11:56.327 "reset": true, 00:11:56.327 "nvme_admin": false, 00:11:56.327 "nvme_io": false, 00:11:56.327 "nvme_io_md": false, 00:11:56.327 "write_zeroes": true, 00:11:56.327 "zcopy": true, 00:11:56.327 "get_zone_info": false, 00:11:56.327 "zone_management": false, 00:11:56.327 "zone_append": false, 00:11:56.327 "compare": false, 00:11:56.327 "compare_and_write": false, 00:11:56.327 "abort": true, 00:11:56.327 "seek_hole": false, 00:11:56.327 "seek_data": false, 00:11:56.327 "copy": true, 00:11:56.327 "nvme_iov_md": false 00:11:56.327 }, 00:11:56.327 "memory_domains": [ 00:11:56.327 { 00:11:56.327 "dma_device_id": "system", 00:11:56.327 "dma_device_type": 1 00:11:56.327 }, 00:11:56.327 { 00:11:56.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.327 "dma_device_type": 2 00:11:56.327 } 00:11:56.327 ], 00:11:56.327 "driver_specific": { 00:11:56.327 "passthru": { 00:11:56.327 "name": "pt1", 00:11:56.327 "base_bdev_name": "malloc1" 00:11:56.327 } 00:11:56.327 } 00:11:56.327 }' 00:11:56.327 13:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:56.327 13:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:56.327 13:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:56.327 13:11:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:56.327 13:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:56.586 13:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:56.586 13:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:56.586 13:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:56.586 13:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:56.586 13:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:56.586 13:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:56.586 13:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:56.586 13:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:56.586 13:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:56.586 13:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:56.845 13:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:56.845 "name": "pt2", 00:11:56.845 "aliases": [ 00:11:56.845 "00000000-0000-0000-0000-000000000002" 00:11:56.845 ], 00:11:56.845 "product_name": "passthru", 00:11:56.845 "block_size": 512, 00:11:56.845 "num_blocks": 65536, 00:11:56.845 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:56.845 "assigned_rate_limits": { 00:11:56.845 "rw_ios_per_sec": 0, 00:11:56.845 "rw_mbytes_per_sec": 0, 00:11:56.845 "r_mbytes_per_sec": 0, 00:11:56.845 "w_mbytes_per_sec": 0 00:11:56.845 }, 00:11:56.845 "claimed": true, 00:11:56.845 "claim_type": "exclusive_write", 00:11:56.845 "zoned": false, 00:11:56.845 "supported_io_types": { 00:11:56.845 "read": true, 
00:11:56.845 "write": true, 00:11:56.845 "unmap": true, 00:11:56.845 "flush": true, 00:11:56.845 "reset": true, 00:11:56.845 "nvme_admin": false, 00:11:56.845 "nvme_io": false, 00:11:56.845 "nvme_io_md": false, 00:11:56.845 "write_zeroes": true, 00:11:56.845 "zcopy": true, 00:11:56.845 "get_zone_info": false, 00:11:56.845 "zone_management": false, 00:11:56.845 "zone_append": false, 00:11:56.845 "compare": false, 00:11:56.845 "compare_and_write": false, 00:11:56.845 "abort": true, 00:11:56.845 "seek_hole": false, 00:11:56.845 "seek_data": false, 00:11:56.845 "copy": true, 00:11:56.845 "nvme_iov_md": false 00:11:56.845 }, 00:11:56.845 "memory_domains": [ 00:11:56.845 { 00:11:56.845 "dma_device_id": "system", 00:11:56.845 "dma_device_type": 1 00:11:56.845 }, 00:11:56.845 { 00:11:56.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.845 "dma_device_type": 2 00:11:56.845 } 00:11:56.845 ], 00:11:56.845 "driver_specific": { 00:11:56.845 "passthru": { 00:11:56.845 "name": "pt2", 00:11:56.845 "base_bdev_name": "malloc2" 00:11:56.845 } 00:11:56.845 } 00:11:56.845 }' 00:11:56.845 13:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:56.845 13:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:56.845 13:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:56.845 13:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:57.104 13:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:57.104 13:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:57.104 13:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:57.104 13:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:57.104 13:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:57.104 13:11:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:57.104 13:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:57.104 13:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:57.104 13:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:57.104 13:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:11:57.363 [2024-07-26 13:11:37.794590] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.363 13:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' adf8e16b-a53b-4042-a5e2-e008a802684b '!=' adf8e16b-a53b-4042-a5e2-e008a802684b ']' 00:11:57.363 13:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:11:57.363 13:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:57.363 13:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:57.363 13:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 657349 00:11:57.363 13:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 657349 ']' 00:11:57.363 13:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 657349 00:11:57.363 13:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:57.363 13:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:57.363 13:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 657349 00:11:57.363 13:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:57.363 13:11:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:57.363 13:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 657349' 00:11:57.363 killing process with pid 657349 00:11:57.363 13:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 657349 00:11:57.363 [2024-07-26 13:11:37.872608] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:57.363 [2024-07-26 13:11:37.872657] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.363 [2024-07-26 13:11:37.872698] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:57.363 [2024-07-26 13:11:37.872708] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf2bac0 name raid_bdev1, state offline 00:11:57.363 13:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 657349 00:11:57.363 [2024-07-26 13:11:37.888107] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:57.623 13:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:11:57.623 00:11:57.623 real 0m9.998s 00:11:57.623 user 0m17.810s 00:11:57.623 sys 0m1.861s 00:11:57.623 13:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:57.623 13:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.623 ************************************ 00:11:57.623 END TEST raid_superblock_test 00:11:57.623 ************************************ 00:11:57.623 13:11:38 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:11:57.623 13:11:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:57.623 13:11:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:57.623 13:11:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:57.882 
************************************ 00:11:57.882 START TEST raid_read_error_test 00:11:57.882 ************************************ 00:11:57.882 13:11:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:11:57.882 13:11:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:11:57.882 13:11:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:11:57.882 13:11:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:11:57.882 13:11:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:11:57.882 13:11:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:57.882 13:11:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:11:57.883 13:11:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:11:57.883 13:11:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:57.883 13:11:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:11:57.883 13:11:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:11:57.883 13:11:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:11:57.883 13:11:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:57.883 13:11:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:11:57.883 13:11:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:11:57.883 13:11:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:11:57.883 13:11:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:11:57.883 13:11:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # 
local bdevperf_log 00:11:57.883 13:11:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:11:57.883 13:11:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:11:57.883 13:11:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:11:57.883 13:11:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:11:57.883 13:11:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:11:57.883 13:11:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.9ut8ynSBZz 00:11:57.883 13:11:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=659167 00:11:57.883 13:11:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 659167 /var/tmp/spdk-raid.sock 00:11:57.883 13:11:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:57.883 13:11:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 659167 ']' 00:11:57.883 13:11:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:57.883 13:11:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:57.883 13:11:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:57.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:11:57.883 13:11:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:57.883 13:11:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.883 [2024-07-26 13:11:38.234690] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:11:57.883 [2024-07-26 13:11:38.234745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid659167 ] 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3f:01.7 
cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:57.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:57.883 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:57.883 [2024-07-26 13:11:38.367099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.143 [2024-07-26 13:11:38.455098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.143 [2024-07-26 13:11:38.521579] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.143 [2024-07-26 13:11:38.521612] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.710 13:11:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:58.710 13:11:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:58.710 13:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:11:58.710 13:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:58.968 BaseBdev1_malloc 00:11:58.968 13:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:59.226 true 00:11:59.226 13:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:59.484 [2024-07-26 13:11:39.795909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:59.484 [2024-07-26 13:11:39.795950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.484 [2024-07-26 13:11:39.795968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2481190 00:11:59.484 [2024-07-26 13:11:39.795980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.484 [2024-07-26 13:11:39.797571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.484 [2024-07-26 13:11:39.797600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:59.484 BaseBdev1 00:11:59.484 13:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:11:59.484 13:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:59.742 BaseBdev2_malloc 00:11:59.742 13:11:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:59.742 true 00:12:00.000 13:11:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:00.000 [2024-07-26 13:11:40.481981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:00.000 [2024-07-26 13:11:40.482021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.000 [2024-07-26 13:11:40.482038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2485e20 00:12:00.000 [2024-07-26 13:11:40.482050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.000 [2024-07-26 13:11:40.483454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.000 [2024-07-26 13:11:40.483480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:00.000 BaseBdev2 00:12:00.000 13:11:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:12:00.258 [2024-07-26 13:11:40.706603] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:00.258 [2024-07-26 13:11:40.707773] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:00.259 [2024-07-26 13:11:40.707935] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2487a50 00:12:00.259 [2024-07-26 13:11:40.707947] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:00.259 [2024-07-26 13:11:40.708146] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2485ac0 00:12:00.259 [2024-07-26 13:11:40.708283] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2487a50 00:12:00.259 [2024-07-26 13:11:40.708293] 
bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2487a50 00:12:00.259 [2024-07-26 13:11:40.708403] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.259 13:11:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:00.259 13:11:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:00.259 13:11:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:00.259 13:11:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:00.259 13:11:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:00.259 13:11:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:00.259 13:11:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:00.259 13:11:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:00.259 13:11:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:00.259 13:11:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:00.259 13:11:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:00.259 13:11:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.517 13:11:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:00.517 "name": "raid_bdev1", 00:12:00.517 "uuid": "881a1cbd-96cc-477e-a0c5-ab1011a2f307", 00:12:00.517 "strip_size_kb": 64, 00:12:00.517 "state": "online", 00:12:00.517 "raid_level": "raid0", 00:12:00.517 "superblock": true, 00:12:00.517 
"num_base_bdevs": 2, 00:12:00.517 "num_base_bdevs_discovered": 2, 00:12:00.517 "num_base_bdevs_operational": 2, 00:12:00.517 "base_bdevs_list": [ 00:12:00.517 { 00:12:00.517 "name": "BaseBdev1", 00:12:00.517 "uuid": "69f7f9c9-2faf-5fdf-bc09-b72dd9c59e17", 00:12:00.517 "is_configured": true, 00:12:00.517 "data_offset": 2048, 00:12:00.517 "data_size": 63488 00:12:00.517 }, 00:12:00.517 { 00:12:00.517 "name": "BaseBdev2", 00:12:00.517 "uuid": "a2718027-1599-5af1-8296-0da8b79f0b85", 00:12:00.517 "is_configured": true, 00:12:00.517 "data_offset": 2048, 00:12:00.517 "data_size": 63488 00:12:00.517 } 00:12:00.517 ] 00:12:00.517 }' 00:12:00.517 13:11:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:00.517 13:11:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.085 13:11:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:12:01.085 13:11:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:01.343 [2024-07-26 13:11:41.625273] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24871b0 00:12:02.279 13:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:02.279 13:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:12:02.279 13:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:02.279 13:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:12:02.279 13:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:02.279 13:11:42 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:02.279 13:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:02.279 13:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:02.279 13:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:02.279 13:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:02.279 13:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:02.279 13:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:02.279 13:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:02.279 13:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:02.279 13:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:02.279 13:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.538 13:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:02.538 "name": "raid_bdev1", 00:12:02.538 "uuid": "881a1cbd-96cc-477e-a0c5-ab1011a2f307", 00:12:02.538 "strip_size_kb": 64, 00:12:02.538 "state": "online", 00:12:02.538 "raid_level": "raid0", 00:12:02.538 "superblock": true, 00:12:02.538 "num_base_bdevs": 2, 00:12:02.538 "num_base_bdevs_discovered": 2, 00:12:02.538 "num_base_bdevs_operational": 2, 00:12:02.538 "base_bdevs_list": [ 00:12:02.538 { 00:12:02.538 "name": "BaseBdev1", 00:12:02.538 "uuid": "69f7f9c9-2faf-5fdf-bc09-b72dd9c59e17", 00:12:02.538 "is_configured": true, 00:12:02.538 "data_offset": 2048, 00:12:02.538 "data_size": 63488 00:12:02.538 }, 00:12:02.538 { 00:12:02.538 "name": 
"BaseBdev2", 00:12:02.538 "uuid": "a2718027-1599-5af1-8296-0da8b79f0b85", 00:12:02.538 "is_configured": true, 00:12:02.538 "data_offset": 2048, 00:12:02.538 "data_size": 63488 00:12:02.538 } 00:12:02.538 ] 00:12:02.538 }' 00:12:02.538 13:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:02.538 13:11:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.142 13:11:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:03.418 [2024-07-26 13:11:43.758558] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:03.418 [2024-07-26 13:11:43.758595] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:03.418 [2024-07-26 13:11:43.761564] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:03.418 [2024-07-26 13:11:43.761598] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.418 [2024-07-26 13:11:43.761623] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:03.418 [2024-07-26 13:11:43.761639] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2487a50 name raid_bdev1, state offline 00:12:03.418 0 00:12:03.418 13:11:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 659167 00:12:03.418 13:11:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 659167 ']' 00:12:03.418 13:11:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 659167 00:12:03.418 13:11:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:12:03.418 13:11:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:03.418 13:11:43 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 659167 00:12:03.418 13:11:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:03.418 13:11:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:03.418 13:11:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 659167' 00:12:03.418 killing process with pid 659167 00:12:03.418 13:11:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 659167 00:12:03.418 [2024-07-26 13:11:43.835073] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:03.418 13:11:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 659167 00:12:03.418 [2024-07-26 13:11:43.844515] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:03.677 13:11:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.9ut8ynSBZz 00:12:03.677 13:11:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:12:03.677 13:11:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:12:03.677 13:11:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:12:03.677 13:11:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:12:03.677 13:11:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:03.677 13:11:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:03.677 13:11:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:12:03.677 00:12:03.677 real 0m5.888s 00:12:03.677 user 0m9.151s 00:12:03.677 sys 0m1.031s 00:12:03.677 13:11:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:03.677 13:11:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:03.677 ************************************ 00:12:03.677 END TEST raid_read_error_test 00:12:03.677 ************************************ 00:12:03.677 13:11:44 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:12:03.677 13:11:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:03.677 13:11:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:03.677 13:11:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:03.677 ************************************ 00:12:03.677 START TEST raid_write_error_test 00:12:03.677 ************************************ 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.zJOCFyZ3XW 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=660323 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 660323 /var/tmp/spdk-raid.sock 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 660323 ']' 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:03.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:03.677 13:11:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.937 [2024-07-26 13:11:44.206903] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:12:03.937 [2024-07-26 13:11:44.206960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid660323 ] 00:12:03.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.937 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:03.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.937 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:03.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.937 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:03.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.937 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:03.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.937 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:03.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.937 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:03.937 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.937 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:03.938 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.938 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:03.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.938 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:03.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.938 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:03.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.938 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:03.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.938 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:03.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.938 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:03.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.938 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:03.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.938 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:03.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.938 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:03.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.938 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:03.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.938 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:03.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.938 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:03.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.938 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:03.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.938 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:03.938 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.938 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:03.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.938 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:03.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.938 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:03.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.938 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:03.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.938 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:03.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.938 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:03.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.938 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:03.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.938 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:03.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.938 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:03.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.938 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:03.938 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:03.938 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:03.938 [2024-07-26 13:11:44.340096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.938 [2024-07-26 13:11:44.421300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.197 [2024-07-26 13:11:44.490165] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.197 [2024-07-26 13:11:44.490201] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.765 13:11:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:04.765 13:11:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:04.765 13:11:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:12:04.765 13:11:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:05.024 BaseBdev1_malloc 00:12:05.024 13:11:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:05.024 true 00:12:05.283 13:11:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:05.283 [2024-07-26 13:11:45.761537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:05.283 [2024-07-26 13:11:45.761580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.283 [2024-07-26 13:11:45.761597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b94190 00:12:05.283 [2024-07-26 13:11:45.761608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.283 [2024-07-26 13:11:45.763073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.283 [2024-07-26 13:11:45.763104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:05.283 BaseBdev1 00:12:05.283 13:11:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:12:05.283 13:11:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:05.542 BaseBdev2_malloc 00:12:05.542 13:11:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:05.801 true 00:12:05.801 13:11:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:06.060 [2024-07-26 13:11:46.439477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:06.060 [2024-07-26 13:11:46.439517] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.060 [2024-07-26 13:11:46.439535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b98e20 00:12:06.060 [2024-07-26 13:11:46.439546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.060 [2024-07-26 13:11:46.440851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.060 [2024-07-26 13:11:46.440878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:06.060 BaseBdev2 00:12:06.060 13:11:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:12:06.319 [2024-07-26 13:11:46.668101] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:06.319 [2024-07-26 13:11:46.669191] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:06.319 [2024-07-26 13:11:46.669346] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 
0x1b9aa50 00:12:06.319 [2024-07-26 13:11:46.669358] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:06.319 [2024-07-26 13:11:46.669529] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b98ac0 00:12:06.319 [2024-07-26 13:11:46.669657] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b9aa50 00:12:06.319 [2024-07-26 13:11:46.669666] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b9aa50 00:12:06.319 [2024-07-26 13:11:46.669766] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:06.319 13:11:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:06.319 13:11:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:06.319 13:11:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:06.319 13:11:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:06.319 13:11:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:06.319 13:11:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:06.319 13:11:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:06.319 13:11:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:06.319 13:11:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:06.319 13:11:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:06.319 13:11:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:06.319 13:11:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.578 13:11:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:06.578 "name": "raid_bdev1", 00:12:06.578 "uuid": "3cb90eba-1524-4dbd-a1a0-774d5057303b", 00:12:06.578 "strip_size_kb": 64, 00:12:06.578 "state": "online", 00:12:06.578 "raid_level": "raid0", 00:12:06.578 "superblock": true, 00:12:06.578 "num_base_bdevs": 2, 00:12:06.578 "num_base_bdevs_discovered": 2, 00:12:06.578 "num_base_bdevs_operational": 2, 00:12:06.578 "base_bdevs_list": [ 00:12:06.578 { 00:12:06.578 "name": "BaseBdev1", 00:12:06.578 "uuid": "5502816d-7f92-5035-ac48-e1dd73e11367", 00:12:06.578 "is_configured": true, 00:12:06.579 "data_offset": 2048, 00:12:06.579 "data_size": 63488 00:12:06.579 }, 00:12:06.579 { 00:12:06.579 "name": "BaseBdev2", 00:12:06.579 "uuid": "34333b02-2eb7-5a88-ada2-a837e179572d", 00:12:06.579 "is_configured": true, 00:12:06.579 "data_offset": 2048, 00:12:06.579 "data_size": 63488 00:12:06.579 } 00:12:06.579 ] 00:12:06.579 }' 00:12:06.579 13:11:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:06.579 13:11:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.147 13:11:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:12:07.147 13:11:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:07.147 [2024-07-26 13:11:47.570846] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b9a1b0 00:12:08.085 13:11:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:08.344 13:11:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:12:08.344 13:11:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:08.344 13:11:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:12:08.344 13:11:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:08.344 13:11:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:08.344 13:11:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:08.344 13:11:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:08.344 13:11:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:08.344 13:11:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:08.344 13:11:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:08.344 13:11:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:08.344 13:11:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:08.344 13:11:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:08.344 13:11:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:08.344 13:11:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.344 13:11:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:08.344 "name": "raid_bdev1", 00:12:08.344 "uuid": "3cb90eba-1524-4dbd-a1a0-774d5057303b", 00:12:08.344 "strip_size_kb": 64, 00:12:08.344 "state": "online", 00:12:08.344 
"raid_level": "raid0", 00:12:08.344 "superblock": true, 00:12:08.344 "num_base_bdevs": 2, 00:12:08.344 "num_base_bdevs_discovered": 2, 00:12:08.344 "num_base_bdevs_operational": 2, 00:12:08.344 "base_bdevs_list": [ 00:12:08.344 { 00:12:08.344 "name": "BaseBdev1", 00:12:08.344 "uuid": "5502816d-7f92-5035-ac48-e1dd73e11367", 00:12:08.344 "is_configured": true, 00:12:08.344 "data_offset": 2048, 00:12:08.344 "data_size": 63488 00:12:08.344 }, 00:12:08.344 { 00:12:08.344 "name": "BaseBdev2", 00:12:08.344 "uuid": "34333b02-2eb7-5a88-ada2-a837e179572d", 00:12:08.344 "is_configured": true, 00:12:08.344 "data_offset": 2048, 00:12:08.344 "data_size": 63488 00:12:08.344 } 00:12:08.344 ] 00:12:08.344 }' 00:12:08.344 13:11:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:08.344 13:11:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.913 13:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:09.172 [2024-07-26 13:11:49.636040] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:09.172 [2024-07-26 13:11:49.636074] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:09.172 [2024-07-26 13:11:49.639003] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:09.172 [2024-07-26 13:11:49.639034] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.172 [2024-07-26 13:11:49.639057] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:09.172 [2024-07-26 13:11:49.639067] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b9aa50 name raid_bdev1, state offline 00:12:09.172 0 00:12:09.172 13:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 660323 
00:12:09.172 13:11:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 660323 ']' 00:12:09.172 13:11:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 660323 00:12:09.172 13:11:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:12:09.172 13:11:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:09.172 13:11:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 660323 00:12:09.432 13:11:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:09.432 13:11:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:09.432 13:11:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 660323' 00:12:09.432 killing process with pid 660323 00:12:09.432 13:11:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 660323 00:12:09.432 [2024-07-26 13:11:49.712593] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:09.432 13:11:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 660323 00:12:09.432 [2024-07-26 13:11:49.722411] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:09.432 13:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.zJOCFyZ3XW 00:12:09.432 13:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:12:09.432 13:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:12:09.432 13:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.49 00:12:09.432 13:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:12:09.432 13:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 
in 00:12:09.432 13:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:09.432 13:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.49 != \0\.\0\0 ]] 00:12:09.432 00:12:09.432 real 0m5.797s 00:12:09.432 user 0m8.899s 00:12:09.432 sys 0m1.067s 00:12:09.432 13:11:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:09.432 13:11:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.432 ************************************ 00:12:09.432 END TEST raid_write_error_test 00:12:09.432 ************************************ 00:12:09.692 13:11:49 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:12:09.692 13:11:49 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:12:09.692 13:11:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:09.692 13:11:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:09.692 13:11:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:09.692 ************************************ 00:12:09.692 START TEST raid_state_function_test 00:12:09.692 ************************************ 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # 
raid_pid=661473 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 661473' 00:12:09.692 Process raid pid: 661473 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 661473 /var/tmp/spdk-raid.sock 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 661473 ']' 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:09.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:09.692 13:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.692 [2024-07-26 13:11:50.081734] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
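Each `verify_raid_bdev_state` call in this trace fetches `bdev_raid_get_bdevs all` over the RPC socket and filters it with `jq -r '.[] | select(.name == "raid_bdev1")'` before comparing state, level, and bdev counts. The stand-in below replays that check against a JSON excerpt copied from the trace, with no SPDK socket needed: the heredoc replaces the `rpc.py` call, and a sed helper substitutes for the jq filter to keep the sketch dependency-free — the field extraction is an illustration, not the script's actual implementation.

```shell
#!/bin/sh
# JSON fields as printed in the trace for raid_bdev1; in the real test
# this comes from: rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
cat > /tmp/raid_info.json <<'EOF'
{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid0",
  "strip_size_kb": 64,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2
}
EOF

# Crude stand-in for the jq select/filter at bdev_raid.sh@126:
# pull one scalar value (quoted or bare) out of the JSON blob.
get_field() {
    sed -n "s/.*\"$1\": \"\{0,1\}\([^\",]*\)\"\{0,1\},\{0,1\}.*/\1/p" /tmp/raid_info.json
}

state=$(get_field state)
level=$(get_field raid_level)
echo "state=$state level=$level"

# Mirror of the verify_raid_bdev_state comparisons for this test.
[ "$state" = "online" ] && [ "$level" = "raid0" ] && echo "raid_bdev1 verified"
```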
00:12:09.692 [2024-07-26 13:11:50.081795] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.692 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:09.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.692 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:09.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.692 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:09.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.692 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:09.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.692 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:09.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.692 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:09.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.692 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:09.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.692 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:09.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.692 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:09.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.692 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:09.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.692 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:09.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.692 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:09.692 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.693 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:09.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.693 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:09.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.693 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:09.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.693 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:09.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.693 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:09.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:09.693 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:09.693 [2024-07-26 13:11:50.214539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.952 [2024-07-26 13:11:50.301323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.952 [2024-07-26 13:11:50.360267] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:09.952 [2024-07-26 13:11:50.360300] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:10.521 13:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:10.521 13:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:12:10.521 13:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:10.780 [2024-07-26 13:11:51.175269] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:10.780 [2024-07-26 13:11:51.175311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 
00:12:10.780 [2024-07-26 13:11:51.175322] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:10.780 [2024-07-26 13:11:51.175333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:10.780 13:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:10.780 13:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:10.780 13:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:10.780 13:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:10.780 13:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:10.780 13:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:10.780 13:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:10.780 13:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:10.780 13:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:10.780 13:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:10.780 13:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:10.780 13:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.038 13:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:11.038 "name": "Existed_Raid", 00:12:11.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.038 "strip_size_kb": 64, 
00:12:11.038 "state": "configuring", 00:12:11.038 "raid_level": "concat", 00:12:11.038 "superblock": false, 00:12:11.038 "num_base_bdevs": 2, 00:12:11.038 "num_base_bdevs_discovered": 0, 00:12:11.038 "num_base_bdevs_operational": 2, 00:12:11.038 "base_bdevs_list": [ 00:12:11.038 { 00:12:11.038 "name": "BaseBdev1", 00:12:11.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.038 "is_configured": false, 00:12:11.038 "data_offset": 0, 00:12:11.038 "data_size": 0 00:12:11.038 }, 00:12:11.038 { 00:12:11.038 "name": "BaseBdev2", 00:12:11.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.038 "is_configured": false, 00:12:11.038 "data_offset": 0, 00:12:11.038 "data_size": 0 00:12:11.038 } 00:12:11.038 ] 00:12:11.038 }' 00:12:11.038 13:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:11.038 13:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.606 13:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:11.865 [2024-07-26 13:11:52.201862] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:11.865 [2024-07-26 13:11:52.201892] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1147f20 name Existed_Raid, state configuring 00:12:11.865 13:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:12.131 [2024-07-26 13:11:52.430472] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:12.131 [2024-07-26 13:11:52.430496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:12.131 [2024-07-26 13:11:52.430505] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:12.131 [2024-07-26 13:11:52.430516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:12.131 13:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:12.391 [2024-07-26 13:11:52.668565] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:12.391 BaseBdev1 00:12:12.391 13:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:12.391 13:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:12.391 13:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:12.391 13:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:12.391 13:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:12.391 13:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:12.391 13:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:12.391 13:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:12.650 [ 00:12:12.650 { 00:12:12.650 "name": "BaseBdev1", 00:12:12.650 "aliases": [ 00:12:12.650 "ed06eecd-f1fa-4047-8a04-6bfd3e98cfae" 00:12:12.650 ], 00:12:12.650 "product_name": "Malloc disk", 00:12:12.650 "block_size": 512, 00:12:12.650 "num_blocks": 65536, 00:12:12.650 "uuid": 
"ed06eecd-f1fa-4047-8a04-6bfd3e98cfae", 00:12:12.650 "assigned_rate_limits": { 00:12:12.650 "rw_ios_per_sec": 0, 00:12:12.650 "rw_mbytes_per_sec": 0, 00:12:12.650 "r_mbytes_per_sec": 0, 00:12:12.650 "w_mbytes_per_sec": 0 00:12:12.650 }, 00:12:12.650 "claimed": true, 00:12:12.650 "claim_type": "exclusive_write", 00:12:12.650 "zoned": false, 00:12:12.650 "supported_io_types": { 00:12:12.650 "read": true, 00:12:12.650 "write": true, 00:12:12.650 "unmap": true, 00:12:12.650 "flush": true, 00:12:12.650 "reset": true, 00:12:12.650 "nvme_admin": false, 00:12:12.650 "nvme_io": false, 00:12:12.650 "nvme_io_md": false, 00:12:12.650 "write_zeroes": true, 00:12:12.650 "zcopy": true, 00:12:12.650 "get_zone_info": false, 00:12:12.650 "zone_management": false, 00:12:12.650 "zone_append": false, 00:12:12.650 "compare": false, 00:12:12.650 "compare_and_write": false, 00:12:12.650 "abort": true, 00:12:12.650 "seek_hole": false, 00:12:12.650 "seek_data": false, 00:12:12.650 "copy": true, 00:12:12.650 "nvme_iov_md": false 00:12:12.650 }, 00:12:12.650 "memory_domains": [ 00:12:12.650 { 00:12:12.650 "dma_device_id": "system", 00:12:12.650 "dma_device_type": 1 00:12:12.650 }, 00:12:12.650 { 00:12:12.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.650 "dma_device_type": 2 00:12:12.650 } 00:12:12.650 ], 00:12:12.650 "driver_specific": {} 00:12:12.650 } 00:12:12.650 ] 00:12:12.650 13:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:12.650 13:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:12.650 13:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:12.650 13:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:12.650 13:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:12.650 13:11:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:12.650 13:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:12.650 13:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:12.650 13:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:12.650 13:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:12.650 13:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:12.650 13:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:12.650 13:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.910 13:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:12.910 "name": "Existed_Raid", 00:12:12.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.910 "strip_size_kb": 64, 00:12:12.910 "state": "configuring", 00:12:12.910 "raid_level": "concat", 00:12:12.910 "superblock": false, 00:12:12.910 "num_base_bdevs": 2, 00:12:12.910 "num_base_bdevs_discovered": 1, 00:12:12.910 "num_base_bdevs_operational": 2, 00:12:12.910 "base_bdevs_list": [ 00:12:12.910 { 00:12:12.910 "name": "BaseBdev1", 00:12:12.910 "uuid": "ed06eecd-f1fa-4047-8a04-6bfd3e98cfae", 00:12:12.910 "is_configured": true, 00:12:12.910 "data_offset": 0, 00:12:12.910 "data_size": 65536 00:12:12.910 }, 00:12:12.910 { 00:12:12.910 "name": "BaseBdev2", 00:12:12.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.910 "is_configured": false, 00:12:12.910 "data_offset": 0, 00:12:12.910 "data_size": 0 00:12:12.910 } 00:12:12.910 ] 00:12:12.910 }' 00:12:12.910 13:11:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:12.910 13:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.478 13:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:13.478 [2024-07-26 13:11:53.959954] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:13.478 [2024-07-26 13:11:53.959989] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1147810 name Existed_Raid, state configuring 00:12:13.478 13:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:13.737 [2024-07-26 13:11:54.188586] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.737 [2024-07-26 13:11:54.189961] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:13.737 [2024-07-26 13:11:54.189992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:13.737 13:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:13.737 13:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:13.737 13:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:13.737 13:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:13.737 13:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:13.737 13:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=concat 00:12:13.737 13:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:13.737 13:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:13.737 13:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:13.737 13:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:13.737 13:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:13.737 13:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:13.737 13:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:13.737 13:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.996 13:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:13.996 "name": "Existed_Raid", 00:12:13.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.996 "strip_size_kb": 64, 00:12:13.996 "state": "configuring", 00:12:13.996 "raid_level": "concat", 00:12:13.996 "superblock": false, 00:12:13.996 "num_base_bdevs": 2, 00:12:13.996 "num_base_bdevs_discovered": 1, 00:12:13.996 "num_base_bdevs_operational": 2, 00:12:13.996 "base_bdevs_list": [ 00:12:13.996 { 00:12:13.996 "name": "BaseBdev1", 00:12:13.996 "uuid": "ed06eecd-f1fa-4047-8a04-6bfd3e98cfae", 00:12:13.996 "is_configured": true, 00:12:13.996 "data_offset": 0, 00:12:13.996 "data_size": 65536 00:12:13.996 }, 00:12:13.996 { 00:12:13.996 "name": "BaseBdev2", 00:12:13.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.996 "is_configured": false, 00:12:13.996 "data_offset": 0, 00:12:13.996 "data_size": 0 00:12:13.996 } 00:12:13.996 ] 00:12:13.997 }' 
00:12:13.997 13:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:13.997 13:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.565 13:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:14.824 [2024-07-26 13:11:55.218393] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:14.824 [2024-07-26 13:11:55.218424] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1148610 00:12:14.824 [2024-07-26 13:11:55.218431] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:14.824 [2024-07-26 13:11:55.218606] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12ec130 00:12:14.824 [2024-07-26 13:11:55.218717] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1148610 00:12:14.824 [2024-07-26 13:11:55.218727] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1148610 00:12:14.824 [2024-07-26 13:11:55.218869] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.824 BaseBdev2 00:12:14.824 13:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:14.824 13:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:14.824 13:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:14.824 13:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:14.824 13:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:14.824 13:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:12:14.824 13:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:15.084 13:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:15.343 [ 00:12:15.343 { 00:12:15.343 "name": "BaseBdev2", 00:12:15.343 "aliases": [ 00:12:15.343 "b9816cc7-bf7d-4cdb-a97d-1bc8cd4121e9" 00:12:15.343 ], 00:12:15.343 "product_name": "Malloc disk", 00:12:15.343 "block_size": 512, 00:12:15.343 "num_blocks": 65536, 00:12:15.343 "uuid": "b9816cc7-bf7d-4cdb-a97d-1bc8cd4121e9", 00:12:15.343 "assigned_rate_limits": { 00:12:15.343 "rw_ios_per_sec": 0, 00:12:15.343 "rw_mbytes_per_sec": 0, 00:12:15.343 "r_mbytes_per_sec": 0, 00:12:15.343 "w_mbytes_per_sec": 0 00:12:15.343 }, 00:12:15.343 "claimed": true, 00:12:15.343 "claim_type": "exclusive_write", 00:12:15.343 "zoned": false, 00:12:15.343 "supported_io_types": { 00:12:15.343 "read": true, 00:12:15.343 "write": true, 00:12:15.343 "unmap": true, 00:12:15.343 "flush": true, 00:12:15.343 "reset": true, 00:12:15.343 "nvme_admin": false, 00:12:15.343 "nvme_io": false, 00:12:15.343 "nvme_io_md": false, 00:12:15.343 "write_zeroes": true, 00:12:15.343 "zcopy": true, 00:12:15.343 "get_zone_info": false, 00:12:15.343 "zone_management": false, 00:12:15.343 "zone_append": false, 00:12:15.343 "compare": false, 00:12:15.343 "compare_and_write": false, 00:12:15.343 "abort": true, 00:12:15.343 "seek_hole": false, 00:12:15.343 "seek_data": false, 00:12:15.343 "copy": true, 00:12:15.343 "nvme_iov_md": false 00:12:15.343 }, 00:12:15.343 "memory_domains": [ 00:12:15.343 { 00:12:15.343 "dma_device_id": "system", 00:12:15.343 "dma_device_type": 1 00:12:15.343 }, 00:12:15.343 { 00:12:15.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.343 "dma_device_type": 2 
00:12:15.343 } 00:12:15.343 ], 00:12:15.343 "driver_specific": {} 00:12:15.343 } 00:12:15.343 ] 00:12:15.343 13:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:15.343 13:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:15.343 13:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:15.343 13:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:12:15.343 13:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:15.343 13:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:15.343 13:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:15.343 13:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:15.343 13:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:15.343 13:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:15.343 13:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:15.343 13:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:15.343 13:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:15.343 13:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:15.344 13:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.602 13:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:12:15.602 "name": "Existed_Raid", 00:12:15.602 "uuid": "d28072dc-e8d8-4999-bdf1-baf530b53286", 00:12:15.602 "strip_size_kb": 64, 00:12:15.603 "state": "online", 00:12:15.603 "raid_level": "concat", 00:12:15.603 "superblock": false, 00:12:15.603 "num_base_bdevs": 2, 00:12:15.603 "num_base_bdevs_discovered": 2, 00:12:15.603 "num_base_bdevs_operational": 2, 00:12:15.603 "base_bdevs_list": [ 00:12:15.603 { 00:12:15.603 "name": "BaseBdev1", 00:12:15.603 "uuid": "ed06eecd-f1fa-4047-8a04-6bfd3e98cfae", 00:12:15.603 "is_configured": true, 00:12:15.603 "data_offset": 0, 00:12:15.603 "data_size": 65536 00:12:15.603 }, 00:12:15.603 { 00:12:15.603 "name": "BaseBdev2", 00:12:15.603 "uuid": "b9816cc7-bf7d-4cdb-a97d-1bc8cd4121e9", 00:12:15.603 "is_configured": true, 00:12:15.603 "data_offset": 0, 00:12:15.603 "data_size": 65536 00:12:15.603 } 00:12:15.603 ] 00:12:15.603 }' 00:12:15.603 13:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:15.603 13:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.171 13:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:16.171 13:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:16.171 13:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:16.171 13:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:16.171 13:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:16.171 13:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:16.171 13:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:16.171 13:11:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:16.171 [2024-07-26 13:11:56.654454] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:16.171 13:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:16.171 "name": "Existed_Raid", 00:12:16.171 "aliases": [ 00:12:16.171 "d28072dc-e8d8-4999-bdf1-baf530b53286" 00:12:16.171 ], 00:12:16.171 "product_name": "Raid Volume", 00:12:16.171 "block_size": 512, 00:12:16.171 "num_blocks": 131072, 00:12:16.171 "uuid": "d28072dc-e8d8-4999-bdf1-baf530b53286", 00:12:16.171 "assigned_rate_limits": { 00:12:16.171 "rw_ios_per_sec": 0, 00:12:16.171 "rw_mbytes_per_sec": 0, 00:12:16.171 "r_mbytes_per_sec": 0, 00:12:16.171 "w_mbytes_per_sec": 0 00:12:16.171 }, 00:12:16.171 "claimed": false, 00:12:16.171 "zoned": false, 00:12:16.171 "supported_io_types": { 00:12:16.171 "read": true, 00:12:16.171 "write": true, 00:12:16.171 "unmap": true, 00:12:16.171 "flush": true, 00:12:16.171 "reset": true, 00:12:16.171 "nvme_admin": false, 00:12:16.171 "nvme_io": false, 00:12:16.171 "nvme_io_md": false, 00:12:16.171 "write_zeroes": true, 00:12:16.171 "zcopy": false, 00:12:16.171 "get_zone_info": false, 00:12:16.171 "zone_management": false, 00:12:16.171 "zone_append": false, 00:12:16.171 "compare": false, 00:12:16.171 "compare_and_write": false, 00:12:16.171 "abort": false, 00:12:16.171 "seek_hole": false, 00:12:16.171 "seek_data": false, 00:12:16.171 "copy": false, 00:12:16.171 "nvme_iov_md": false 00:12:16.171 }, 00:12:16.171 "memory_domains": [ 00:12:16.171 { 00:12:16.171 "dma_device_id": "system", 00:12:16.171 "dma_device_type": 1 00:12:16.171 }, 00:12:16.171 { 00:12:16.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.171 "dma_device_type": 2 00:12:16.171 }, 00:12:16.171 { 00:12:16.171 "dma_device_id": "system", 00:12:16.171 "dma_device_type": 1 00:12:16.171 }, 00:12:16.171 { 00:12:16.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:12:16.172 "dma_device_type": 2 00:12:16.172 } 00:12:16.172 ], 00:12:16.172 "driver_specific": { 00:12:16.172 "raid": { 00:12:16.172 "uuid": "d28072dc-e8d8-4999-bdf1-baf530b53286", 00:12:16.172 "strip_size_kb": 64, 00:12:16.172 "state": "online", 00:12:16.172 "raid_level": "concat", 00:12:16.172 "superblock": false, 00:12:16.172 "num_base_bdevs": 2, 00:12:16.172 "num_base_bdevs_discovered": 2, 00:12:16.172 "num_base_bdevs_operational": 2, 00:12:16.172 "base_bdevs_list": [ 00:12:16.172 { 00:12:16.172 "name": "BaseBdev1", 00:12:16.172 "uuid": "ed06eecd-f1fa-4047-8a04-6bfd3e98cfae", 00:12:16.172 "is_configured": true, 00:12:16.172 "data_offset": 0, 00:12:16.172 "data_size": 65536 00:12:16.172 }, 00:12:16.172 { 00:12:16.172 "name": "BaseBdev2", 00:12:16.172 "uuid": "b9816cc7-bf7d-4cdb-a97d-1bc8cd4121e9", 00:12:16.172 "is_configured": true, 00:12:16.172 "data_offset": 0, 00:12:16.172 "data_size": 65536 00:12:16.172 } 00:12:16.172 ] 00:12:16.172 } 00:12:16.172 } 00:12:16.172 }' 00:12:16.172 13:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:16.475 13:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:16.475 BaseBdev2' 00:12:16.475 13:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:16.475 13:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:16.475 13:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:16.475 13:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:16.475 "name": "BaseBdev1", 00:12:16.475 "aliases": [ 00:12:16.475 "ed06eecd-f1fa-4047-8a04-6bfd3e98cfae" 00:12:16.475 ], 00:12:16.475 "product_name": "Malloc disk", 
00:12:16.475 "block_size": 512, 00:12:16.475 "num_blocks": 65536, 00:12:16.475 "uuid": "ed06eecd-f1fa-4047-8a04-6bfd3e98cfae", 00:12:16.475 "assigned_rate_limits": { 00:12:16.475 "rw_ios_per_sec": 0, 00:12:16.475 "rw_mbytes_per_sec": 0, 00:12:16.475 "r_mbytes_per_sec": 0, 00:12:16.475 "w_mbytes_per_sec": 0 00:12:16.475 }, 00:12:16.475 "claimed": true, 00:12:16.475 "claim_type": "exclusive_write", 00:12:16.475 "zoned": false, 00:12:16.475 "supported_io_types": { 00:12:16.475 "read": true, 00:12:16.475 "write": true, 00:12:16.475 "unmap": true, 00:12:16.475 "flush": true, 00:12:16.475 "reset": true, 00:12:16.475 "nvme_admin": false, 00:12:16.475 "nvme_io": false, 00:12:16.475 "nvme_io_md": false, 00:12:16.475 "write_zeroes": true, 00:12:16.475 "zcopy": true, 00:12:16.475 "get_zone_info": false, 00:12:16.475 "zone_management": false, 00:12:16.475 "zone_append": false, 00:12:16.475 "compare": false, 00:12:16.475 "compare_and_write": false, 00:12:16.475 "abort": true, 00:12:16.475 "seek_hole": false, 00:12:16.475 "seek_data": false, 00:12:16.475 "copy": true, 00:12:16.475 "nvme_iov_md": false 00:12:16.475 }, 00:12:16.475 "memory_domains": [ 00:12:16.475 { 00:12:16.475 "dma_device_id": "system", 00:12:16.475 "dma_device_type": 1 00:12:16.475 }, 00:12:16.475 { 00:12:16.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.475 "dma_device_type": 2 00:12:16.475 } 00:12:16.476 ], 00:12:16.476 "driver_specific": {} 00:12:16.476 }' 00:12:16.476 13:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:16.755 13:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:16.755 13:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:16.755 13:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:16.755 13:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:16.755 13:11:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:16.755 13:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:16.755 13:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:16.755 13:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:16.755 13:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:16.755 13:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:17.013 13:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:17.013 13:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:17.013 13:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:17.013 13:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:17.013 13:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:17.013 "name": "BaseBdev2", 00:12:17.013 "aliases": [ 00:12:17.013 "b9816cc7-bf7d-4cdb-a97d-1bc8cd4121e9" 00:12:17.013 ], 00:12:17.013 "product_name": "Malloc disk", 00:12:17.013 "block_size": 512, 00:12:17.013 "num_blocks": 65536, 00:12:17.013 "uuid": "b9816cc7-bf7d-4cdb-a97d-1bc8cd4121e9", 00:12:17.013 "assigned_rate_limits": { 00:12:17.013 "rw_ios_per_sec": 0, 00:12:17.013 "rw_mbytes_per_sec": 0, 00:12:17.013 "r_mbytes_per_sec": 0, 00:12:17.013 "w_mbytes_per_sec": 0 00:12:17.013 }, 00:12:17.013 "claimed": true, 00:12:17.013 "claim_type": "exclusive_write", 00:12:17.013 "zoned": false, 00:12:17.013 "supported_io_types": { 00:12:17.013 "read": true, 00:12:17.013 "write": true, 00:12:17.013 "unmap": true, 00:12:17.013 "flush": true, 00:12:17.013 "reset": 
true, 00:12:17.013 "nvme_admin": false, 00:12:17.013 "nvme_io": false, 00:12:17.013 "nvme_io_md": false, 00:12:17.013 "write_zeroes": true, 00:12:17.013 "zcopy": true, 00:12:17.013 "get_zone_info": false, 00:12:17.013 "zone_management": false, 00:12:17.013 "zone_append": false, 00:12:17.013 "compare": false, 00:12:17.013 "compare_and_write": false, 00:12:17.013 "abort": true, 00:12:17.013 "seek_hole": false, 00:12:17.013 "seek_data": false, 00:12:17.013 "copy": true, 00:12:17.013 "nvme_iov_md": false 00:12:17.013 }, 00:12:17.013 "memory_domains": [ 00:12:17.013 { 00:12:17.013 "dma_device_id": "system", 00:12:17.013 "dma_device_type": 1 00:12:17.013 }, 00:12:17.014 { 00:12:17.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.014 "dma_device_type": 2 00:12:17.014 } 00:12:17.014 ], 00:12:17.014 "driver_specific": {} 00:12:17.014 }' 00:12:17.014 13:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:17.272 13:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:17.272 13:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:17.272 13:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:17.272 13:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:17.272 13:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:17.272 13:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:17.272 13:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:17.272 13:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:17.272 13:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:17.531 13:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:17.532 13:11:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:17.532 13:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:17.791 [2024-07-26 13:11:58.061954] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:17.791 [2024-07-26 13:11:58.061977] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:17.791 [2024-07-26 13:11:58.062013] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:17.791 13:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:17.791 13:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:12:17.791 13:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:17.791 13:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:17.791 13:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:12:17.791 13:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:12:17.791 13:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:17.791 13:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:12:17.791 13:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:17.791 13:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:17.791 13:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:12:17.791 13:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:12:17.791 13:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:17.791 13:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:17.791 13:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:17.791 13:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:17.791 13:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.791 13:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:17.791 "name": "Existed_Raid", 00:12:17.791 "uuid": "d28072dc-e8d8-4999-bdf1-baf530b53286", 00:12:17.791 "strip_size_kb": 64, 00:12:17.791 "state": "offline", 00:12:17.791 "raid_level": "concat", 00:12:17.791 "superblock": false, 00:12:17.791 "num_base_bdevs": 2, 00:12:17.791 "num_base_bdevs_discovered": 1, 00:12:17.791 "num_base_bdevs_operational": 1, 00:12:17.791 "base_bdevs_list": [ 00:12:17.791 { 00:12:17.791 "name": null, 00:12:17.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.792 "is_configured": false, 00:12:17.792 "data_offset": 0, 00:12:17.792 "data_size": 65536 00:12:17.792 }, 00:12:17.792 { 00:12:17.792 "name": "BaseBdev2", 00:12:17.792 "uuid": "b9816cc7-bf7d-4cdb-a97d-1bc8cd4121e9", 00:12:17.792 "is_configured": true, 00:12:17.792 "data_offset": 0, 00:12:17.792 "data_size": 65536 00:12:17.792 } 00:12:17.792 ] 00:12:17.792 }' 00:12:17.792 13:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:17.792 13:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.729 13:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:18.729 13:11:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:18.730 13:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:18.730 13:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:18.730 13:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:18.730 13:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:18.730 13:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:18.989 [2024-07-26 13:11:59.322223] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:18.989 [2024-07-26 13:11:59.322267] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1148610 name Existed_Raid, state offline 00:12:18.989 13:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:18.989 13:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:18.989 13:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:18.989 13:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:19.249 13:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:19.249 13:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:19.249 13:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:12:19.249 13:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 661473 
00:12:19.249 13:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 661473 ']' 00:12:19.249 13:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 661473 00:12:19.249 13:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:12:19.249 13:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:19.249 13:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 661473 00:12:19.249 13:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:19.249 13:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:19.249 13:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 661473' 00:12:19.249 killing process with pid 661473 00:12:19.249 13:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 661473 00:12:19.249 [2024-07-26 13:11:59.629824] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:19.249 13:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 661473 00:12:19.249 [2024-07-26 13:11:59.630676] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:19.508 13:11:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:12:19.508 00:12:19.508 real 0m9.802s 00:12:19.508 user 0m17.391s 00:12:19.508 sys 0m1.859s 00:12:19.508 13:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:19.508 13:11:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.508 ************************************ 00:12:19.508 END TEST raid_state_function_test 00:12:19.508 ************************************ 00:12:19.508 13:11:59 bdev_raid -- 
bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:12:19.508 13:11:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:19.508 13:11:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:19.508 13:11:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:19.508 ************************************ 00:12:19.508 START TEST raid_state_function_test_sb 00:12:19.509 ************************************ 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:19.509 13:11:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=663311 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 663311' 00:12:19.509 Process raid pid: 663311 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 663311 /var/tmp/spdk-raid.sock 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 663311 ']' 00:12:19.509 13:11:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:19.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:19.509 13:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.509 [2024-07-26 13:11:59.969481] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:12:19.509 [2024-07-26 13:11:59.969539] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3d:01.5 cannot be used 
00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:19.768 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:19.768 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:19.768 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:19.768 [2024-07-26 13:12:00.104427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.768 [2024-07-26 13:12:00.193965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.768 [2024-07-26 13:12:00.260384] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: 
raid_bdev_get_ctx_size 00:12:19.768 [2024-07-26 13:12:00.260412] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.706 13:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:20.706 13:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:20.706 13:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:20.706 [2024-07-26 13:12:01.074720] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:20.706 [2024-07-26 13:12:01.074764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:20.706 [2024-07-26 13:12:01.074775] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:20.706 [2024-07-26 13:12:01.074787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:20.706 13:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:20.706 13:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:20.706 13:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:20.706 13:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:20.706 13:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:20.706 13:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:20.706 13:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:12:20.706 13:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:20.706 13:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:20.706 13:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:20.706 13:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:20.706 13:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:20.965 13:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:20.965 "name": "Existed_Raid", 00:12:20.965 "uuid": "bfa1c364-4c0d-408a-af86-72d280535c55", 00:12:20.965 "strip_size_kb": 64, 00:12:20.965 "state": "configuring", 00:12:20.965 "raid_level": "concat", 00:12:20.965 "superblock": true, 00:12:20.965 "num_base_bdevs": 2, 00:12:20.965 "num_base_bdevs_discovered": 0, 00:12:20.965 "num_base_bdevs_operational": 2, 00:12:20.965 "base_bdevs_list": [ 00:12:20.965 { 00:12:20.965 "name": "BaseBdev1", 00:12:20.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.965 "is_configured": false, 00:12:20.965 "data_offset": 0, 00:12:20.965 "data_size": 0 00:12:20.965 }, 00:12:20.965 { 00:12:20.965 "name": "BaseBdev2", 00:12:20.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.965 "is_configured": false, 00:12:20.965 "data_offset": 0, 00:12:20.965 "data_size": 0 00:12:20.965 } 00:12:20.965 ] 00:12:20.965 }' 00:12:20.965 13:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:20.965 13:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.533 13:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:21.792 [2024-07-26 13:12:02.081245] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:21.792 [2024-07-26 13:12:02.081280] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb03f20 name Existed_Raid, state configuring 00:12:21.792 13:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:21.792 [2024-07-26 13:12:02.305850] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:21.792 [2024-07-26 13:12:02.305879] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:21.792 [2024-07-26 13:12:02.305888] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:21.792 [2024-07-26 13:12:02.305899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:22.051 13:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:22.051 [2024-07-26 13:12:02.544069] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:22.051 BaseBdev1 00:12:22.051 13:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:22.051 13:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:22.051 13:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:22.051 13:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 
00:12:22.051 13:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:22.052 13:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:22.052 13:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:22.311 13:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:22.569 [ 00:12:22.569 { 00:12:22.569 "name": "BaseBdev1", 00:12:22.569 "aliases": [ 00:12:22.569 "e4d9762a-cb26-47eb-9fbd-155d246f668f" 00:12:22.569 ], 00:12:22.569 "product_name": "Malloc disk", 00:12:22.569 "block_size": 512, 00:12:22.569 "num_blocks": 65536, 00:12:22.569 "uuid": "e4d9762a-cb26-47eb-9fbd-155d246f668f", 00:12:22.569 "assigned_rate_limits": { 00:12:22.569 "rw_ios_per_sec": 0, 00:12:22.569 "rw_mbytes_per_sec": 0, 00:12:22.569 "r_mbytes_per_sec": 0, 00:12:22.569 "w_mbytes_per_sec": 0 00:12:22.569 }, 00:12:22.569 "claimed": true, 00:12:22.569 "claim_type": "exclusive_write", 00:12:22.569 "zoned": false, 00:12:22.569 "supported_io_types": { 00:12:22.569 "read": true, 00:12:22.569 "write": true, 00:12:22.569 "unmap": true, 00:12:22.569 "flush": true, 00:12:22.569 "reset": true, 00:12:22.569 "nvme_admin": false, 00:12:22.569 "nvme_io": false, 00:12:22.569 "nvme_io_md": false, 00:12:22.569 "write_zeroes": true, 00:12:22.569 "zcopy": true, 00:12:22.569 "get_zone_info": false, 00:12:22.569 "zone_management": false, 00:12:22.569 "zone_append": false, 00:12:22.569 "compare": false, 00:12:22.569 "compare_and_write": false, 00:12:22.569 "abort": true, 00:12:22.569 "seek_hole": false, 00:12:22.569 "seek_data": false, 00:12:22.569 "copy": true, 00:12:22.569 "nvme_iov_md": false 00:12:22.569 }, 00:12:22.569 
"memory_domains": [ 00:12:22.569 { 00:12:22.570 "dma_device_id": "system", 00:12:22.570 "dma_device_type": 1 00:12:22.570 }, 00:12:22.570 { 00:12:22.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.570 "dma_device_type": 2 00:12:22.570 } 00:12:22.570 ], 00:12:22.570 "driver_specific": {} 00:12:22.570 } 00:12:22.570 ] 00:12:22.570 13:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:22.570 13:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:22.570 13:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:22.570 13:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:22.570 13:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:22.570 13:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:22.570 13:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:22.570 13:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:22.570 13:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:22.570 13:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:22.570 13:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:22.570 13:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:22.570 13:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.829 13:12:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:22.829 "name": "Existed_Raid", 00:12:22.829 "uuid": "c91eca05-1e69-4215-84ac-d3a98b96837b", 00:12:22.829 "strip_size_kb": 64, 00:12:22.829 "state": "configuring", 00:12:22.829 "raid_level": "concat", 00:12:22.829 "superblock": true, 00:12:22.829 "num_base_bdevs": 2, 00:12:22.829 "num_base_bdevs_discovered": 1, 00:12:22.829 "num_base_bdevs_operational": 2, 00:12:22.829 "base_bdevs_list": [ 00:12:22.829 { 00:12:22.829 "name": "BaseBdev1", 00:12:22.829 "uuid": "e4d9762a-cb26-47eb-9fbd-155d246f668f", 00:12:22.829 "is_configured": true, 00:12:22.829 "data_offset": 2048, 00:12:22.829 "data_size": 63488 00:12:22.829 }, 00:12:22.829 { 00:12:22.829 "name": "BaseBdev2", 00:12:22.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.829 "is_configured": false, 00:12:22.829 "data_offset": 0, 00:12:22.829 "data_size": 0 00:12:22.829 } 00:12:22.829 ] 00:12:22.829 }' 00:12:22.829 13:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:22.829 13:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.396 13:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:23.654 [2024-07-26 13:12:03.963814] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:23.654 [2024-07-26 13:12:03.963851] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb03810 name Existed_Raid, state configuring 00:12:23.654 13:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:23.913 [2024-07-26 13:12:04.192467] bdev_raid.c:3312:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:12:23.913 [2024-07-26 13:12:04.193859] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:23.913 [2024-07-26 13:12:04.193893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:23.913 13:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:23.913 13:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:23.913 13:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:23.913 13:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:23.913 13:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:23.913 13:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:23.913 13:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:23.913 13:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:23.913 13:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:23.913 13:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:23.913 13:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:23.913 13:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:23.913 13:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:23.913 13:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.913 13:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:23.913 "name": "Existed_Raid", 00:12:23.913 "uuid": "e90e5b15-5197-43df-9510-dbe1f7814893", 00:12:23.913 "strip_size_kb": 64, 00:12:23.913 "state": "configuring", 00:12:23.913 "raid_level": "concat", 00:12:23.913 "superblock": true, 00:12:23.913 "num_base_bdevs": 2, 00:12:23.913 "num_base_bdevs_discovered": 1, 00:12:23.913 "num_base_bdevs_operational": 2, 00:12:23.913 "base_bdevs_list": [ 00:12:23.913 { 00:12:23.913 "name": "BaseBdev1", 00:12:23.913 "uuid": "e4d9762a-cb26-47eb-9fbd-155d246f668f", 00:12:23.913 "is_configured": true, 00:12:23.913 "data_offset": 2048, 00:12:23.913 "data_size": 63488 00:12:23.913 }, 00:12:23.913 { 00:12:23.913 "name": "BaseBdev2", 00:12:23.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.913 "is_configured": false, 00:12:23.913 "data_offset": 0, 00:12:23.913 "data_size": 0 00:12:23.913 } 00:12:23.913 ] 00:12:23.914 }' 00:12:23.914 13:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:23.914 13:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.482 13:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:24.741 [2024-07-26 13:12:05.154086] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:24.741 [2024-07-26 13:12:05.154235] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xb04610 00:12:24.741 [2024-07-26 13:12:05.154249] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:24.741 [2024-07-26 13:12:05.154413] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb04c50 00:12:24.741 [2024-07-26 13:12:05.154523] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb04610 00:12:24.741 [2024-07-26 13:12:05.154532] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xb04610 00:12:24.741 [2024-07-26 13:12:05.154616] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.741 BaseBdev2 00:12:24.741 13:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:24.741 13:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:24.741 13:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:24.741 13:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:24.741 13:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:24.741 13:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:24.741 13:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:25.000 13:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:25.000 [ 00:12:25.000 { 00:12:25.000 "name": "BaseBdev2", 00:12:25.000 "aliases": [ 00:12:25.000 "9e3ddccf-8005-4fb8-bff1-dc3ec2621056" 00:12:25.000 ], 00:12:25.000 "product_name": "Malloc disk", 00:12:25.000 "block_size": 512, 00:12:25.000 "num_blocks": 65536, 00:12:25.000 "uuid": "9e3ddccf-8005-4fb8-bff1-dc3ec2621056", 00:12:25.000 "assigned_rate_limits": { 00:12:25.000 "rw_ios_per_sec": 0, 00:12:25.000 "rw_mbytes_per_sec": 0, 00:12:25.000 "r_mbytes_per_sec": 0, 00:12:25.000 "w_mbytes_per_sec": 
0 00:12:25.000 }, 00:12:25.000 "claimed": true, 00:12:25.000 "claim_type": "exclusive_write", 00:12:25.000 "zoned": false, 00:12:25.000 "supported_io_types": { 00:12:25.000 "read": true, 00:12:25.000 "write": true, 00:12:25.000 "unmap": true, 00:12:25.000 "flush": true, 00:12:25.000 "reset": true, 00:12:25.000 "nvme_admin": false, 00:12:25.000 "nvme_io": false, 00:12:25.000 "nvme_io_md": false, 00:12:25.000 "write_zeroes": true, 00:12:25.000 "zcopy": true, 00:12:25.000 "get_zone_info": false, 00:12:25.000 "zone_management": false, 00:12:25.000 "zone_append": false, 00:12:25.000 "compare": false, 00:12:25.000 "compare_and_write": false, 00:12:25.000 "abort": true, 00:12:25.000 "seek_hole": false, 00:12:25.000 "seek_data": false, 00:12:25.000 "copy": true, 00:12:25.000 "nvme_iov_md": false 00:12:25.000 }, 00:12:25.000 "memory_domains": [ 00:12:25.000 { 00:12:25.000 "dma_device_id": "system", 00:12:25.000 "dma_device_type": 1 00:12:25.000 }, 00:12:25.000 { 00:12:25.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.000 "dma_device_type": 2 00:12:25.000 } 00:12:25.000 ], 00:12:25.000 "driver_specific": {} 00:12:25.000 } 00:12:25.000 ] 00:12:25.000 13:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:25.000 13:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:25.000 13:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:25.000 13:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:12:25.000 13:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:25.000 13:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:25.000 13:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:25.000 13:12:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:25.000 13:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:25.000 13:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:25.000 13:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:25.000 13:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:25.001 13:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:25.001 13:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:25.001 13:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.259 13:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:25.259 "name": "Existed_Raid", 00:12:25.259 "uuid": "e90e5b15-5197-43df-9510-dbe1f7814893", 00:12:25.259 "strip_size_kb": 64, 00:12:25.259 "state": "online", 00:12:25.259 "raid_level": "concat", 00:12:25.259 "superblock": true, 00:12:25.259 "num_base_bdevs": 2, 00:12:25.259 "num_base_bdevs_discovered": 2, 00:12:25.259 "num_base_bdevs_operational": 2, 00:12:25.259 "base_bdevs_list": [ 00:12:25.259 { 00:12:25.259 "name": "BaseBdev1", 00:12:25.259 "uuid": "e4d9762a-cb26-47eb-9fbd-155d246f668f", 00:12:25.259 "is_configured": true, 00:12:25.259 "data_offset": 2048, 00:12:25.259 "data_size": 63488 00:12:25.259 }, 00:12:25.259 { 00:12:25.259 "name": "BaseBdev2", 00:12:25.259 "uuid": "9e3ddccf-8005-4fb8-bff1-dc3ec2621056", 00:12:25.259 "is_configured": true, 00:12:25.259 "data_offset": 2048, 00:12:25.259 "data_size": 63488 00:12:25.259 } 00:12:25.259 ] 00:12:25.259 }' 00:12:25.259 
13:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:25.259 13:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.825 13:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:25.825 13:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:25.825 13:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:25.825 13:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:25.825 13:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:25.825 13:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:25.825 13:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:25.825 13:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:26.083 [2024-07-26 13:12:06.521968] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:26.083 13:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:26.083 "name": "Existed_Raid", 00:12:26.083 "aliases": [ 00:12:26.083 "e90e5b15-5197-43df-9510-dbe1f7814893" 00:12:26.083 ], 00:12:26.083 "product_name": "Raid Volume", 00:12:26.083 "block_size": 512, 00:12:26.083 "num_blocks": 126976, 00:12:26.083 "uuid": "e90e5b15-5197-43df-9510-dbe1f7814893", 00:12:26.083 "assigned_rate_limits": { 00:12:26.083 "rw_ios_per_sec": 0, 00:12:26.083 "rw_mbytes_per_sec": 0, 00:12:26.083 "r_mbytes_per_sec": 0, 00:12:26.083 "w_mbytes_per_sec": 0 00:12:26.083 }, 00:12:26.083 "claimed": false, 00:12:26.083 "zoned": false, 00:12:26.083 
"supported_io_types": { 00:12:26.083 "read": true, 00:12:26.083 "write": true, 00:12:26.083 "unmap": true, 00:12:26.083 "flush": true, 00:12:26.083 "reset": true, 00:12:26.083 "nvme_admin": false, 00:12:26.083 "nvme_io": false, 00:12:26.083 "nvme_io_md": false, 00:12:26.083 "write_zeroes": true, 00:12:26.083 "zcopy": false, 00:12:26.083 "get_zone_info": false, 00:12:26.083 "zone_management": false, 00:12:26.083 "zone_append": false, 00:12:26.083 "compare": false, 00:12:26.083 "compare_and_write": false, 00:12:26.083 "abort": false, 00:12:26.083 "seek_hole": false, 00:12:26.083 "seek_data": false, 00:12:26.083 "copy": false, 00:12:26.083 "nvme_iov_md": false 00:12:26.083 }, 00:12:26.083 "memory_domains": [ 00:12:26.083 { 00:12:26.083 "dma_device_id": "system", 00:12:26.083 "dma_device_type": 1 00:12:26.083 }, 00:12:26.083 { 00:12:26.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.083 "dma_device_type": 2 00:12:26.083 }, 00:12:26.083 { 00:12:26.083 "dma_device_id": "system", 00:12:26.083 "dma_device_type": 1 00:12:26.083 }, 00:12:26.083 { 00:12:26.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.083 "dma_device_type": 2 00:12:26.083 } 00:12:26.083 ], 00:12:26.083 "driver_specific": { 00:12:26.083 "raid": { 00:12:26.083 "uuid": "e90e5b15-5197-43df-9510-dbe1f7814893", 00:12:26.083 "strip_size_kb": 64, 00:12:26.083 "state": "online", 00:12:26.083 "raid_level": "concat", 00:12:26.083 "superblock": true, 00:12:26.083 "num_base_bdevs": 2, 00:12:26.083 "num_base_bdevs_discovered": 2, 00:12:26.083 "num_base_bdevs_operational": 2, 00:12:26.083 "base_bdevs_list": [ 00:12:26.083 { 00:12:26.083 "name": "BaseBdev1", 00:12:26.083 "uuid": "e4d9762a-cb26-47eb-9fbd-155d246f668f", 00:12:26.083 "is_configured": true, 00:12:26.083 "data_offset": 2048, 00:12:26.083 "data_size": 63488 00:12:26.083 }, 00:12:26.083 { 00:12:26.083 "name": "BaseBdev2", 00:12:26.083 "uuid": "9e3ddccf-8005-4fb8-bff1-dc3ec2621056", 00:12:26.083 "is_configured": true, 00:12:26.083 "data_offset": 
2048, 00:12:26.083 "data_size": 63488 00:12:26.083 } 00:12:26.083 ] 00:12:26.083 } 00:12:26.083 } 00:12:26.083 }' 00:12:26.083 13:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:26.083 13:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:26.083 BaseBdev2' 00:12:26.083 13:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:26.084 13:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:26.084 13:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:26.342 13:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:26.342 "name": "BaseBdev1", 00:12:26.342 "aliases": [ 00:12:26.342 "e4d9762a-cb26-47eb-9fbd-155d246f668f" 00:12:26.342 ], 00:12:26.342 "product_name": "Malloc disk", 00:12:26.342 "block_size": 512, 00:12:26.342 "num_blocks": 65536, 00:12:26.342 "uuid": "e4d9762a-cb26-47eb-9fbd-155d246f668f", 00:12:26.342 "assigned_rate_limits": { 00:12:26.342 "rw_ios_per_sec": 0, 00:12:26.342 "rw_mbytes_per_sec": 0, 00:12:26.342 "r_mbytes_per_sec": 0, 00:12:26.342 "w_mbytes_per_sec": 0 00:12:26.342 }, 00:12:26.342 "claimed": true, 00:12:26.342 "claim_type": "exclusive_write", 00:12:26.342 "zoned": false, 00:12:26.342 "supported_io_types": { 00:12:26.342 "read": true, 00:12:26.342 "write": true, 00:12:26.342 "unmap": true, 00:12:26.342 "flush": true, 00:12:26.342 "reset": true, 00:12:26.342 "nvme_admin": false, 00:12:26.342 "nvme_io": false, 00:12:26.342 "nvme_io_md": false, 00:12:26.342 "write_zeroes": true, 00:12:26.342 "zcopy": true, 00:12:26.342 "get_zone_info": false, 00:12:26.342 "zone_management": false, 00:12:26.342 
"zone_append": false, 00:12:26.342 "compare": false, 00:12:26.342 "compare_and_write": false, 00:12:26.342 "abort": true, 00:12:26.342 "seek_hole": false, 00:12:26.342 "seek_data": false, 00:12:26.342 "copy": true, 00:12:26.342 "nvme_iov_md": false 00:12:26.342 }, 00:12:26.342 "memory_domains": [ 00:12:26.342 { 00:12:26.342 "dma_device_id": "system", 00:12:26.342 "dma_device_type": 1 00:12:26.342 }, 00:12:26.342 { 00:12:26.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.342 "dma_device_type": 2 00:12:26.342 } 00:12:26.342 ], 00:12:26.342 "driver_specific": {} 00:12:26.342 }' 00:12:26.342 13:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:26.342 13:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:26.600 13:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:26.600 13:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:26.600 13:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:26.600 13:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:26.600 13:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:26.600 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:26.600 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:26.600 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:26.600 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:26.858 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:26.858 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:26.858 13:12:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:26.858 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:26.858 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:26.858 "name": "BaseBdev2", 00:12:26.858 "aliases": [ 00:12:26.858 "9e3ddccf-8005-4fb8-bff1-dc3ec2621056" 00:12:26.858 ], 00:12:26.858 "product_name": "Malloc disk", 00:12:26.858 "block_size": 512, 00:12:26.858 "num_blocks": 65536, 00:12:26.858 "uuid": "9e3ddccf-8005-4fb8-bff1-dc3ec2621056", 00:12:26.858 "assigned_rate_limits": { 00:12:26.858 "rw_ios_per_sec": 0, 00:12:26.858 "rw_mbytes_per_sec": 0, 00:12:26.858 "r_mbytes_per_sec": 0, 00:12:26.858 "w_mbytes_per_sec": 0 00:12:26.858 }, 00:12:26.858 "claimed": true, 00:12:26.858 "claim_type": "exclusive_write", 00:12:26.858 "zoned": false, 00:12:26.858 "supported_io_types": { 00:12:26.858 "read": true, 00:12:26.858 "write": true, 00:12:26.858 "unmap": true, 00:12:26.858 "flush": true, 00:12:26.858 "reset": true, 00:12:26.858 "nvme_admin": false, 00:12:26.858 "nvme_io": false, 00:12:26.858 "nvme_io_md": false, 00:12:26.858 "write_zeroes": true, 00:12:26.858 "zcopy": true, 00:12:26.858 "get_zone_info": false, 00:12:26.858 "zone_management": false, 00:12:26.858 "zone_append": false, 00:12:26.858 "compare": false, 00:12:26.858 "compare_and_write": false, 00:12:26.858 "abort": true, 00:12:26.858 "seek_hole": false, 00:12:26.858 "seek_data": false, 00:12:26.858 "copy": true, 00:12:26.858 "nvme_iov_md": false 00:12:26.858 }, 00:12:26.858 "memory_domains": [ 00:12:26.858 { 00:12:26.858 "dma_device_id": "system", 00:12:26.858 "dma_device_type": 1 00:12:26.858 }, 00:12:26.858 { 00:12:26.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.858 "dma_device_type": 2 00:12:26.858 } 00:12:26.858 ], 00:12:26.858 "driver_specific": {} 
00:12:26.858 }' 00:12:26.858 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:27.116 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:27.116 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:27.116 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:27.116 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:27.116 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:27.116 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:27.116 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:27.116 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:27.116 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:27.374 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:27.374 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:27.374 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:27.633 [2024-07-26 13:12:07.909413] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:27.633 [2024-07-26 13:12:07.909437] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:27.633 [2024-07-26 13:12:07.909477] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:27.633 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:27.633 13:12:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:12:27.633 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:27.633 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:12:27.633 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:12:27.633 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:12:27.633 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:27.633 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:12:27.633 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:27.633 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:27.633 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:12:27.633 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:27.633 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:27.633 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:27.633 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:27.633 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:27.633 13:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.891 13:12:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:27.891 "name": "Existed_Raid", 00:12:27.891 "uuid": "e90e5b15-5197-43df-9510-dbe1f7814893", 00:12:27.891 "strip_size_kb": 64, 00:12:27.891 "state": "offline", 00:12:27.891 "raid_level": "concat", 00:12:27.891 "superblock": true, 00:12:27.891 "num_base_bdevs": 2, 00:12:27.891 "num_base_bdevs_discovered": 1, 00:12:27.891 "num_base_bdevs_operational": 1, 00:12:27.891 "base_bdevs_list": [ 00:12:27.891 { 00:12:27.891 "name": null, 00:12:27.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.891 "is_configured": false, 00:12:27.891 "data_offset": 2048, 00:12:27.891 "data_size": 63488 00:12:27.891 }, 00:12:27.891 { 00:12:27.891 "name": "BaseBdev2", 00:12:27.891 "uuid": "9e3ddccf-8005-4fb8-bff1-dc3ec2621056", 00:12:27.891 "is_configured": true, 00:12:27.891 "data_offset": 2048, 00:12:27.891 "data_size": 63488 00:12:27.891 } 00:12:27.891 ] 00:12:27.891 }' 00:12:27.891 13:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:27.891 13:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.458 13:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:28.458 13:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:28.458 13:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:28.458 13:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:28.458 13:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:28.458 13:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:28.458 13:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:28.716 [2024-07-26 13:12:09.173824] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:28.716 [2024-07-26 13:12:09.173872] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb04610 name Existed_Raid, state offline 00:12:28.716 13:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:28.716 13:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:28.716 13:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:28.716 13:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:28.975 13:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:28.975 13:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:28.975 13:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:12:28.975 13:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 663311 00:12:28.975 13:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 663311 ']' 00:12:28.975 13:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 663311 00:12:28.975 13:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:28.975 13:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:28.975 13:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 663311 00:12:28.975 13:12:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:28.975 13:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:28.975 13:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 663311' 00:12:28.975 killing process with pid 663311 00:12:28.975 13:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 663311 00:12:28.975 [2024-07-26 13:12:09.488635] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:28.975 13:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 663311 00:12:28.975 [2024-07-26 13:12:09.489494] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:29.234 13:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:12:29.234 00:12:29.234 real 0m9.776s 00:12:29.234 user 0m17.347s 00:12:29.234 sys 0m1.836s 00:12:29.234 13:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:29.234 13:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.234 ************************************ 00:12:29.234 END TEST raid_state_function_test_sb 00:12:29.234 ************************************ 00:12:29.234 13:12:09 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:12:29.234 13:12:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:29.234 13:12:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:29.234 13:12:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:29.497 ************************************ 00:12:29.497 START TEST raid_superblock_test 00:12:29.497 ************************************ 00:12:29.497 13:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 
00:12:29.497 13:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:12:29.497 13:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:12:29.497 13:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:12:29.497 13:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:12:29.497 13:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:12:29.497 13:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:12:29.497 13:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:12:29.497 13:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:12:29.497 13:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:12:29.497 13:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:12:29.497 13:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:12:29.497 13:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:12:29.497 13:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:12:29.497 13:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:12:29.497 13:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:12:29.497 13:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:12:29.497 13:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=665899 00:12:29.497 13:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 665899 /var/tmp/spdk-raid.sock 00:12:29.497 13:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:12:29.497 13:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 665899 ']' 00:12:29.497 13:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:29.497 13:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:29.497 13:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:29.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:29.497 13:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:29.497 13:12:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.497 [2024-07-26 13:12:09.829131] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:12:29.497 [2024-07-26 13:12:09.829195] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665899 ] 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3d:02.3 cannot be used 
00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:29.497 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:29.497 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:29.497 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:29.497 [2024-07-26 13:12:09.963099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.793 [2024-07-26 13:12:10.055536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.793 [2024-07-26 13:12:10.110768] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.793 [2024-07-26 13:12:10.110800] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:30.360 13:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:30.360 13:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:12:30.360 13:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:12:30.360 13:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:12:30.360 13:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:12:30.360 13:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:12:30.360 13:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:30.360 13:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:30.360 13:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:12:30.360 13:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:30.360 13:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:12:30.619 malloc1 00:12:30.619 13:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:30.879 [2024-07-26 13:12:11.166663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:30.879 [2024-07-26 13:12:11.166708] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.879 [2024-07-26 13:12:11.166728] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12da2f0 00:12:30.879 [2024-07-26 13:12:11.166742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.879 [2024-07-26 13:12:11.168250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.879 [2024-07-26 13:12:11.168278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:30.879 pt1 00:12:30.879 13:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:12:30.879 13:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:12:30.879 13:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:12:30.879 13:12:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:12:30.879 13:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:30.879 13:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:30.879 13:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:12:30.879 13:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:30.879 13:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:12:30.879 malloc2 00:12:31.138 13:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:31.138 [2024-07-26 13:12:11.628407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:31.138 [2024-07-26 13:12:11.628446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.138 [2024-07-26 13:12:11.628462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12db6d0 00:12:31.138 [2024-07-26 13:12:11.628474] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.138 [2024-07-26 13:12:11.629907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.138 [2024-07-26 13:12:11.629934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:31.138 pt2 00:12:31.138 13:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:12:31.138 13:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:12:31.138 13:12:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:12:31.397 [2024-07-26 13:12:11.853019] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:31.397 [2024-07-26 13:12:11.854192] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:31.397 [2024-07-26 13:12:11.854309] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1474310 00:12:31.397 [2024-07-26 13:12:11.854321] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:31.397 [2024-07-26 13:12:11.854507] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12d3400 00:12:31.397 [2024-07-26 13:12:11.854630] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1474310 00:12:31.397 [2024-07-26 13:12:11.854640] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1474310 00:12:31.397 [2024-07-26 13:12:11.854741] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.397 13:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:31.397 13:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:31.397 13:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:31.397 13:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:31.397 13:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:31.397 13:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:31.397 13:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:12:31.397 13:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:31.397 13:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:31.397 13:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:31.397 13:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:31.397 13:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.657 13:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:31.657 "name": "raid_bdev1", 00:12:31.657 "uuid": "2428de3e-dfce-48bb-8580-03467c223233", 00:12:31.657 "strip_size_kb": 64, 00:12:31.657 "state": "online", 00:12:31.657 "raid_level": "concat", 00:12:31.657 "superblock": true, 00:12:31.657 "num_base_bdevs": 2, 00:12:31.657 "num_base_bdevs_discovered": 2, 00:12:31.657 "num_base_bdevs_operational": 2, 00:12:31.657 "base_bdevs_list": [ 00:12:31.657 { 00:12:31.657 "name": "pt1", 00:12:31.657 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:31.657 "is_configured": true, 00:12:31.657 "data_offset": 2048, 00:12:31.657 "data_size": 63488 00:12:31.657 }, 00:12:31.657 { 00:12:31.657 "name": "pt2", 00:12:31.657 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:31.657 "is_configured": true, 00:12:31.657 "data_offset": 2048, 00:12:31.657 "data_size": 63488 00:12:31.657 } 00:12:31.657 ] 00:12:31.657 }' 00:12:31.657 13:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:31.657 13:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.225 13:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:12:32.225 13:12:12 bdev_raid.raid_superblock_test -- 
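The `verify_raid_bdev_state` helper in the trace above fetches `bdev_raid_get_bdevs` output over the RPC socket and checks the expected fields with `jq`. A minimal sketch of the same check in Python (field names and values taken from the JSON dumped in this log; the function here is illustrative, not SPDK's actual helper):

```python
import json

def verify_raid_bdev_state(raid_info, expected_state, raid_level,
                           strip_size_kb, num_operational):
    # Mirrors the jq checks done by verify_raid_bdev_state in
    # bdev_raid.sh: state, raid level, strip size, and base bdev counts.
    assert raid_info["state"] == expected_state
    assert raid_info["raid_level"] == raid_level
    assert raid_info["strip_size_kb"] == strip_size_kb
    assert raid_info["num_base_bdevs_operational"] == num_operational
    if expected_state == "online":
        # an online raid bdev must have discovered every operational base bdev
        assert raid_info["num_base_bdevs_discovered"] == num_operational

# Sample payload copied from the bdev_raid_get_bdevs output in this log.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "uuid": "2428de3e-dfce-48bb-8580-03467c223233",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "concat",
  "superblock": true,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}
""")
verify_raid_bdev_state(raid_bdev_info, "online", "concat", 64, 2)
```

Note that in the "configuring" state seen later in this test, `num_base_bdevs_discovered` is allowed to be smaller than the operational count, so the discovered-count check only applies once the array is online.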
bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:32.225 13:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:32.225 13:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:32.225 13:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:32.225 13:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:32.225 13:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:32.225 13:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:32.484 [2024-07-26 13:12:12.863886] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:32.484 13:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:32.484 "name": "raid_bdev1", 00:12:32.484 "aliases": [ 00:12:32.484 "2428de3e-dfce-48bb-8580-03467c223233" 00:12:32.484 ], 00:12:32.484 "product_name": "Raid Volume", 00:12:32.484 "block_size": 512, 00:12:32.484 "num_blocks": 126976, 00:12:32.484 "uuid": "2428de3e-dfce-48bb-8580-03467c223233", 00:12:32.484 "assigned_rate_limits": { 00:12:32.484 "rw_ios_per_sec": 0, 00:12:32.484 "rw_mbytes_per_sec": 0, 00:12:32.484 "r_mbytes_per_sec": 0, 00:12:32.484 "w_mbytes_per_sec": 0 00:12:32.484 }, 00:12:32.484 "claimed": false, 00:12:32.484 "zoned": false, 00:12:32.484 "supported_io_types": { 00:12:32.484 "read": true, 00:12:32.484 "write": true, 00:12:32.484 "unmap": true, 00:12:32.484 "flush": true, 00:12:32.484 "reset": true, 00:12:32.484 "nvme_admin": false, 00:12:32.484 "nvme_io": false, 00:12:32.484 "nvme_io_md": false, 00:12:32.484 "write_zeroes": true, 00:12:32.484 "zcopy": false, 00:12:32.484 "get_zone_info": false, 00:12:32.484 "zone_management": false, 00:12:32.484 "zone_append": false, 
00:12:32.484 "compare": false, 00:12:32.484 "compare_and_write": false, 00:12:32.484 "abort": false, 00:12:32.484 "seek_hole": false, 00:12:32.484 "seek_data": false, 00:12:32.484 "copy": false, 00:12:32.484 "nvme_iov_md": false 00:12:32.484 }, 00:12:32.484 "memory_domains": [ 00:12:32.484 { 00:12:32.484 "dma_device_id": "system", 00:12:32.484 "dma_device_type": 1 00:12:32.484 }, 00:12:32.484 { 00:12:32.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.484 "dma_device_type": 2 00:12:32.484 }, 00:12:32.484 { 00:12:32.484 "dma_device_id": "system", 00:12:32.484 "dma_device_type": 1 00:12:32.484 }, 00:12:32.484 { 00:12:32.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.484 "dma_device_type": 2 00:12:32.484 } 00:12:32.484 ], 00:12:32.484 "driver_specific": { 00:12:32.484 "raid": { 00:12:32.484 "uuid": "2428de3e-dfce-48bb-8580-03467c223233", 00:12:32.484 "strip_size_kb": 64, 00:12:32.484 "state": "online", 00:12:32.484 "raid_level": "concat", 00:12:32.484 "superblock": true, 00:12:32.484 "num_base_bdevs": 2, 00:12:32.484 "num_base_bdevs_discovered": 2, 00:12:32.484 "num_base_bdevs_operational": 2, 00:12:32.484 "base_bdevs_list": [ 00:12:32.484 { 00:12:32.484 "name": "pt1", 00:12:32.484 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:32.484 "is_configured": true, 00:12:32.484 "data_offset": 2048, 00:12:32.484 "data_size": 63488 00:12:32.484 }, 00:12:32.484 { 00:12:32.484 "name": "pt2", 00:12:32.484 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:32.484 "is_configured": true, 00:12:32.484 "data_offset": 2048, 00:12:32.484 "data_size": 63488 00:12:32.484 } 00:12:32.484 ] 00:12:32.484 } 00:12:32.484 } 00:12:32.484 }' 00:12:32.484 13:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:32.484 13:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:32.484 pt2' 00:12:32.484 13:12:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:32.484 13:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:32.484 13:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:32.743 13:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:32.743 "name": "pt1", 00:12:32.743 "aliases": [ 00:12:32.743 "00000000-0000-0000-0000-000000000001" 00:12:32.743 ], 00:12:32.743 "product_name": "passthru", 00:12:32.743 "block_size": 512, 00:12:32.743 "num_blocks": 65536, 00:12:32.743 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:32.743 "assigned_rate_limits": { 00:12:32.743 "rw_ios_per_sec": 0, 00:12:32.743 "rw_mbytes_per_sec": 0, 00:12:32.743 "r_mbytes_per_sec": 0, 00:12:32.743 "w_mbytes_per_sec": 0 00:12:32.743 }, 00:12:32.743 "claimed": true, 00:12:32.743 "claim_type": "exclusive_write", 00:12:32.743 "zoned": false, 00:12:32.743 "supported_io_types": { 00:12:32.743 "read": true, 00:12:32.743 "write": true, 00:12:32.743 "unmap": true, 00:12:32.743 "flush": true, 00:12:32.743 "reset": true, 00:12:32.743 "nvme_admin": false, 00:12:32.743 "nvme_io": false, 00:12:32.743 "nvme_io_md": false, 00:12:32.743 "write_zeroes": true, 00:12:32.743 "zcopy": true, 00:12:32.743 "get_zone_info": false, 00:12:32.743 "zone_management": false, 00:12:32.743 "zone_append": false, 00:12:32.743 "compare": false, 00:12:32.743 "compare_and_write": false, 00:12:32.743 "abort": true, 00:12:32.743 "seek_hole": false, 00:12:32.743 "seek_data": false, 00:12:32.743 "copy": true, 00:12:32.743 "nvme_iov_md": false 00:12:32.743 }, 00:12:32.743 "memory_domains": [ 00:12:32.743 { 00:12:32.743 "dma_device_id": "system", 00:12:32.743 "dma_device_type": 1 00:12:32.743 }, 00:12:32.743 { 00:12:32.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.743 
"dma_device_type": 2 00:12:32.743 } 00:12:32.743 ], 00:12:32.743 "driver_specific": { 00:12:32.743 "passthru": { 00:12:32.743 "name": "pt1", 00:12:32.743 "base_bdev_name": "malloc1" 00:12:32.743 } 00:12:32.743 } 00:12:32.743 }' 00:12:32.743 13:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:32.743 13:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:32.743 13:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:32.743 13:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:33.002 13:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:33.002 13:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:33.002 13:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:33.002 13:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:33.002 13:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:33.002 13:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:33.002 13:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:33.002 13:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:33.002 13:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:33.002 13:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:33.002 13:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:33.261 13:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:33.261 "name": "pt2", 00:12:33.261 "aliases": [ 00:12:33.261 
"00000000-0000-0000-0000-000000000002" 00:12:33.261 ], 00:12:33.261 "product_name": "passthru", 00:12:33.261 "block_size": 512, 00:12:33.261 "num_blocks": 65536, 00:12:33.261 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:33.261 "assigned_rate_limits": { 00:12:33.261 "rw_ios_per_sec": 0, 00:12:33.261 "rw_mbytes_per_sec": 0, 00:12:33.261 "r_mbytes_per_sec": 0, 00:12:33.261 "w_mbytes_per_sec": 0 00:12:33.261 }, 00:12:33.261 "claimed": true, 00:12:33.261 "claim_type": "exclusive_write", 00:12:33.261 "zoned": false, 00:12:33.261 "supported_io_types": { 00:12:33.261 "read": true, 00:12:33.261 "write": true, 00:12:33.261 "unmap": true, 00:12:33.261 "flush": true, 00:12:33.261 "reset": true, 00:12:33.261 "nvme_admin": false, 00:12:33.261 "nvme_io": false, 00:12:33.261 "nvme_io_md": false, 00:12:33.261 "write_zeroes": true, 00:12:33.261 "zcopy": true, 00:12:33.261 "get_zone_info": false, 00:12:33.261 "zone_management": false, 00:12:33.261 "zone_append": false, 00:12:33.261 "compare": false, 00:12:33.261 "compare_and_write": false, 00:12:33.261 "abort": true, 00:12:33.261 "seek_hole": false, 00:12:33.261 "seek_data": false, 00:12:33.261 "copy": true, 00:12:33.261 "nvme_iov_md": false 00:12:33.261 }, 00:12:33.261 "memory_domains": [ 00:12:33.261 { 00:12:33.261 "dma_device_id": "system", 00:12:33.261 "dma_device_type": 1 00:12:33.261 }, 00:12:33.261 { 00:12:33.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.261 "dma_device_type": 2 00:12:33.261 } 00:12:33.261 ], 00:12:33.261 "driver_specific": { 00:12:33.261 "passthru": { 00:12:33.261 "name": "pt2", 00:12:33.261 "base_bdev_name": "malloc2" 00:12:33.261 } 00:12:33.261 } 00:12:33.261 }' 00:12:33.261 13:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:33.261 13:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:33.261 13:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:33.261 13:12:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:33.521 13:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:33.521 13:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:33.521 13:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:33.521 13:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:33.521 13:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:33.521 13:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:33.521 13:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:33.521 13:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:33.521 13:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:33.521 13:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:12:33.780 [2024-07-26 13:12:14.243538] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:33.780 13:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=2428de3e-dfce-48bb-8580-03467c223233 00:12:33.780 13:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 2428de3e-dfce-48bb-8580-03467c223233 ']' 00:12:33.780 13:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:34.040 [2024-07-26 13:12:14.471905] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:34.040 [2024-07-26 13:12:14.471926] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from 
online to offline 00:12:34.040 [2024-07-26 13:12:14.471977] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:34.040 [2024-07-26 13:12:14.472021] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:34.040 [2024-07-26 13:12:14.472032] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1474310 name raid_bdev1, state offline 00:12:34.040 13:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:34.040 13:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:12:34.299 13:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:12:34.299 13:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:12:34.299 13:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:12:34.299 13:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:34.558 13:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:12:34.558 13:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:34.817 13:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:34.817 13:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:35.076 13:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:12:35.077 13:12:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:12:35.077 13:12:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:35.077 13:12:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:12:35.077 13:12:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:35.077 13:12:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.077 13:12:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:35.077 13:12:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.077 13:12:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:35.077 13:12:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.077 13:12:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:35.077 13:12:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:12:35.077 13:12:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n 
raid_bdev1 00:12:35.077 [2024-07-26 13:12:15.598834] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:35.077 [2024-07-26 13:12:15.600079] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:35.077 [2024-07-26 13:12:15.600133] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:35.077 [2024-07-26 13:12:15.600177] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:35.077 [2024-07-26 13:12:15.600195] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:35.077 [2024-07-26 13:12:15.600205] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12d10e0 name raid_bdev1, state configuring 00:12:35.336 request: 00:12:35.336 { 00:12:35.336 "name": "raid_bdev1", 00:12:35.336 "raid_level": "concat", 00:12:35.336 "base_bdevs": [ 00:12:35.336 "malloc1", 00:12:35.336 "malloc2" 00:12:35.336 ], 00:12:35.336 "strip_size_kb": 64, 00:12:35.336 "superblock": false, 00:12:35.336 "method": "bdev_raid_create", 00:12:35.336 "req_id": 1 00:12:35.336 } 00:12:35.336 Got JSON-RPC error response 00:12:35.336 response: 00:12:35.336 { 00:12:35.336 "code": -17, 00:12:35.336 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:35.336 } 00:12:35.336 13:12:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:35.336 13:12:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:35.336 13:12:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:35.336 13:12:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:35.336 13:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:12:35.336 13:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:12:35.336 13:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:12:35.336 13:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:12:35.336 13:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:35.595 [2024-07-26 13:12:16.051962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:35.595 [2024-07-26 13:12:16.052002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.595 [2024-07-26 13:12:16.052019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x147dd70 00:12:35.595 [2024-07-26 13:12:16.052030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.595 [2024-07-26 13:12:16.053476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.595 [2024-07-26 13:12:16.053503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:35.595 [2024-07-26 13:12:16.053564] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:35.595 [2024-07-26 13:12:16.053588] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:35.595 pt1 00:12:35.595 13:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:12:35.595 13:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:35.595 13:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:35.595 13:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=concat 00:12:35.595 13:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:35.595 13:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:35.595 13:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:35.595 13:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:35.595 13:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:35.595 13:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:35.595 13:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:35.595 13:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.855 13:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:35.855 "name": "raid_bdev1", 00:12:35.855 "uuid": "2428de3e-dfce-48bb-8580-03467c223233", 00:12:35.855 "strip_size_kb": 64, 00:12:35.855 "state": "configuring", 00:12:35.855 "raid_level": "concat", 00:12:35.855 "superblock": true, 00:12:35.855 "num_base_bdevs": 2, 00:12:35.855 "num_base_bdevs_discovered": 1, 00:12:35.855 "num_base_bdevs_operational": 2, 00:12:35.855 "base_bdevs_list": [ 00:12:35.855 { 00:12:35.855 "name": "pt1", 00:12:35.855 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:35.855 "is_configured": true, 00:12:35.855 "data_offset": 2048, 00:12:35.855 "data_size": 63488 00:12:35.855 }, 00:12:35.855 { 00:12:35.855 "name": null, 00:12:35.855 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:35.855 "is_configured": false, 00:12:35.855 "data_offset": 2048, 00:12:35.855 "data_size": 63488 00:12:35.855 } 00:12:35.855 ] 00:12:35.855 }' 00:12:35.855 13:12:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:35.855 13:12:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.423 13:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:12:36.424 13:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:12:36.424 13:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:12:36.424 13:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:36.683 [2024-07-26 13:12:17.066639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:36.683 [2024-07-26 13:12:17.066679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.683 [2024-07-26 13:12:17.066695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12d3740 00:12:36.683 [2024-07-26 13:12:17.066706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.683 [2024-07-26 13:12:17.067010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.683 [2024-07-26 13:12:17.067026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:36.683 [2024-07-26 13:12:17.067080] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:36.683 [2024-07-26 13:12:17.067098] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:36.683 [2024-07-26 13:12:17.067195] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1474d10 00:12:36.684 [2024-07-26 13:12:17.067205] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:36.684 [2024-07-26 13:12:17.067353] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1473a30 00:12:36.684 [2024-07-26 13:12:17.067468] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1474d10 00:12:36.684 [2024-07-26 13:12:17.067478] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1474d10 00:12:36.684 [2024-07-26 13:12:17.067565] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.684 pt2 00:12:36.684 13:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:12:36.684 13:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:12:36.684 13:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:36.684 13:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:36.684 13:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:36.684 13:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:36.684 13:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:36.684 13:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:36.684 13:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:36.684 13:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:36.684 13:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:36.684 13:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:36.684 13:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:36.684 13:12:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.943 13:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:36.943 "name": "raid_bdev1", 00:12:36.943 "uuid": "2428de3e-dfce-48bb-8580-03467c223233", 00:12:36.943 "strip_size_kb": 64, 00:12:36.943 "state": "online", 00:12:36.943 "raid_level": "concat", 00:12:36.943 "superblock": true, 00:12:36.943 "num_base_bdevs": 2, 00:12:36.943 "num_base_bdevs_discovered": 2, 00:12:36.943 "num_base_bdevs_operational": 2, 00:12:36.943 "base_bdevs_list": [ 00:12:36.943 { 00:12:36.943 "name": "pt1", 00:12:36.943 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:36.943 "is_configured": true, 00:12:36.943 "data_offset": 2048, 00:12:36.943 "data_size": 63488 00:12:36.943 }, 00:12:36.943 { 00:12:36.943 "name": "pt2", 00:12:36.943 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:36.943 "is_configured": true, 00:12:36.943 "data_offset": 2048, 00:12:36.943 "data_size": 63488 00:12:36.943 } 00:12:36.943 ] 00:12:36.943 }' 00:12:36.943 13:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:36.943 13:12:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.511 13:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:12:37.511 13:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:37.511 13:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:37.511 13:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:37.511 13:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:37.511 13:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:37.511 13:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:37.511 13:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:37.769 [2024-07-26 13:12:18.101585] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:37.769 13:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:37.769 "name": "raid_bdev1", 00:12:37.769 "aliases": [ 00:12:37.769 "2428de3e-dfce-48bb-8580-03467c223233" 00:12:37.769 ], 00:12:37.769 "product_name": "Raid Volume", 00:12:37.769 "block_size": 512, 00:12:37.769 "num_blocks": 126976, 00:12:37.769 "uuid": "2428de3e-dfce-48bb-8580-03467c223233", 00:12:37.769 "assigned_rate_limits": { 00:12:37.769 "rw_ios_per_sec": 0, 00:12:37.769 "rw_mbytes_per_sec": 0, 00:12:37.769 "r_mbytes_per_sec": 0, 00:12:37.769 "w_mbytes_per_sec": 0 00:12:37.769 }, 00:12:37.769 "claimed": false, 00:12:37.769 "zoned": false, 00:12:37.769 "supported_io_types": { 00:12:37.769 "read": true, 00:12:37.769 "write": true, 00:12:37.769 "unmap": true, 00:12:37.769 "flush": true, 00:12:37.769 "reset": true, 00:12:37.769 "nvme_admin": false, 00:12:37.769 "nvme_io": false, 00:12:37.769 "nvme_io_md": false, 00:12:37.769 "write_zeroes": true, 00:12:37.769 "zcopy": false, 00:12:37.769 "get_zone_info": false, 00:12:37.769 "zone_management": false, 00:12:37.769 "zone_append": false, 00:12:37.769 "compare": false, 00:12:37.769 "compare_and_write": false, 00:12:37.769 "abort": false, 00:12:37.769 "seek_hole": false, 00:12:37.770 "seek_data": false, 00:12:37.770 "copy": false, 00:12:37.770 "nvme_iov_md": false 00:12:37.770 }, 00:12:37.770 "memory_domains": [ 00:12:37.770 { 00:12:37.770 "dma_device_id": "system", 00:12:37.770 "dma_device_type": 1 00:12:37.770 }, 00:12:37.770 { 00:12:37.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.770 "dma_device_type": 2 00:12:37.770 }, 00:12:37.770 { 00:12:37.770 "dma_device_id": "system", 
00:12:37.770 "dma_device_type": 1 00:12:37.770 }, 00:12:37.770 { 00:12:37.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.770 "dma_device_type": 2 00:12:37.770 } 00:12:37.770 ], 00:12:37.770 "driver_specific": { 00:12:37.770 "raid": { 00:12:37.770 "uuid": "2428de3e-dfce-48bb-8580-03467c223233", 00:12:37.770 "strip_size_kb": 64, 00:12:37.770 "state": "online", 00:12:37.770 "raid_level": "concat", 00:12:37.770 "superblock": true, 00:12:37.770 "num_base_bdevs": 2, 00:12:37.770 "num_base_bdevs_discovered": 2, 00:12:37.770 "num_base_bdevs_operational": 2, 00:12:37.770 "base_bdevs_list": [ 00:12:37.770 { 00:12:37.770 "name": "pt1", 00:12:37.770 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:37.770 "is_configured": true, 00:12:37.770 "data_offset": 2048, 00:12:37.770 "data_size": 63488 00:12:37.770 }, 00:12:37.770 { 00:12:37.770 "name": "pt2", 00:12:37.770 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:37.770 "is_configured": true, 00:12:37.770 "data_offset": 2048, 00:12:37.770 "data_size": 63488 00:12:37.770 } 00:12:37.770 ] 00:12:37.770 } 00:12:37.770 } 00:12:37.770 }' 00:12:37.770 13:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:37.770 13:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:37.770 pt2' 00:12:37.770 13:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:37.770 13:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:37.770 13:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:38.028 13:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:38.028 "name": "pt1", 00:12:38.028 "aliases": [ 00:12:38.028 "00000000-0000-0000-0000-000000000001" 
00:12:38.028 ], 00:12:38.028 "product_name": "passthru", 00:12:38.028 "block_size": 512, 00:12:38.028 "num_blocks": 65536, 00:12:38.028 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:38.028 "assigned_rate_limits": { 00:12:38.028 "rw_ios_per_sec": 0, 00:12:38.028 "rw_mbytes_per_sec": 0, 00:12:38.028 "r_mbytes_per_sec": 0, 00:12:38.028 "w_mbytes_per_sec": 0 00:12:38.028 }, 00:12:38.028 "claimed": true, 00:12:38.028 "claim_type": "exclusive_write", 00:12:38.028 "zoned": false, 00:12:38.028 "supported_io_types": { 00:12:38.028 "read": true, 00:12:38.028 "write": true, 00:12:38.028 "unmap": true, 00:12:38.028 "flush": true, 00:12:38.028 "reset": true, 00:12:38.028 "nvme_admin": false, 00:12:38.028 "nvme_io": false, 00:12:38.028 "nvme_io_md": false, 00:12:38.028 "write_zeroes": true, 00:12:38.028 "zcopy": true, 00:12:38.028 "get_zone_info": false, 00:12:38.028 "zone_management": false, 00:12:38.028 "zone_append": false, 00:12:38.028 "compare": false, 00:12:38.028 "compare_and_write": false, 00:12:38.028 "abort": true, 00:12:38.028 "seek_hole": false, 00:12:38.028 "seek_data": false, 00:12:38.028 "copy": true, 00:12:38.028 "nvme_iov_md": false 00:12:38.028 }, 00:12:38.028 "memory_domains": [ 00:12:38.028 { 00:12:38.028 "dma_device_id": "system", 00:12:38.028 "dma_device_type": 1 00:12:38.028 }, 00:12:38.028 { 00:12:38.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.028 "dma_device_type": 2 00:12:38.028 } 00:12:38.028 ], 00:12:38.028 "driver_specific": { 00:12:38.028 "passthru": { 00:12:38.028 "name": "pt1", 00:12:38.028 "base_bdev_name": "malloc1" 00:12:38.028 } 00:12:38.028 } 00:12:38.028 }' 00:12:38.028 13:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:38.029 13:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:38.029 13:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:38.029 13:12:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:38.029 13:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:38.288 13:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:38.288 13:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:38.288 13:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:38.288 13:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:38.288 13:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:38.288 13:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:38.288 13:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:38.288 13:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:38.288 13:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:38.288 13:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:38.546 13:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:38.546 "name": "pt2", 00:12:38.546 "aliases": [ 00:12:38.546 "00000000-0000-0000-0000-000000000002" 00:12:38.546 ], 00:12:38.546 "product_name": "passthru", 00:12:38.546 "block_size": 512, 00:12:38.546 "num_blocks": 65536, 00:12:38.546 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:38.546 "assigned_rate_limits": { 00:12:38.546 "rw_ios_per_sec": 0, 00:12:38.546 "rw_mbytes_per_sec": 0, 00:12:38.546 "r_mbytes_per_sec": 0, 00:12:38.546 "w_mbytes_per_sec": 0 00:12:38.546 }, 00:12:38.546 "claimed": true, 00:12:38.546 "claim_type": "exclusive_write", 00:12:38.546 "zoned": false, 00:12:38.546 "supported_io_types": { 00:12:38.546 "read": true, 
00:12:38.546 "write": true, 00:12:38.546 "unmap": true, 00:12:38.546 "flush": true, 00:12:38.546 "reset": true, 00:12:38.546 "nvme_admin": false, 00:12:38.546 "nvme_io": false, 00:12:38.546 "nvme_io_md": false, 00:12:38.546 "write_zeroes": true, 00:12:38.546 "zcopy": true, 00:12:38.546 "get_zone_info": false, 00:12:38.546 "zone_management": false, 00:12:38.546 "zone_append": false, 00:12:38.546 "compare": false, 00:12:38.546 "compare_and_write": false, 00:12:38.546 "abort": true, 00:12:38.546 "seek_hole": false, 00:12:38.546 "seek_data": false, 00:12:38.546 "copy": true, 00:12:38.546 "nvme_iov_md": false 00:12:38.546 }, 00:12:38.546 "memory_domains": [ 00:12:38.546 { 00:12:38.546 "dma_device_id": "system", 00:12:38.546 "dma_device_type": 1 00:12:38.546 }, 00:12:38.546 { 00:12:38.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.547 "dma_device_type": 2 00:12:38.547 } 00:12:38.547 ], 00:12:38.547 "driver_specific": { 00:12:38.547 "passthru": { 00:12:38.547 "name": "pt2", 00:12:38.547 "base_bdev_name": "malloc2" 00:12:38.547 } 00:12:38.547 } 00:12:38.547 }' 00:12:38.547 13:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:38.547 13:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:38.547 13:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:38.547 13:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:38.806 13:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:38.806 13:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:38.806 13:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:38.806 13:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:38.806 13:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:38.806 13:12:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:38.806 13:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:38.806 13:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:38.806 13:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:38.806 13:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:12:39.065 [2024-07-26 13:12:19.485232] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:39.065 13:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 2428de3e-dfce-48bb-8580-03467c223233 '!=' 2428de3e-dfce-48bb-8580-03467c223233 ']' 00:12:39.065 13:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:12:39.065 13:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:39.065 13:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:39.065 13:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 665899 00:12:39.065 13:12:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 665899 ']' 00:12:39.065 13:12:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 665899 00:12:39.065 13:12:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:12:39.065 13:12:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:39.065 13:12:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 665899 00:12:39.065 13:12:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:39.065 13:12:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:39.065 13:12:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 665899' 00:12:39.065 killing process with pid 665899 00:12:39.065 13:12:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 665899 00:12:39.065 [2024-07-26 13:12:19.562792] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:39.065 [2024-07-26 13:12:19.562842] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:39.065 [2024-07-26 13:12:19.562880] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:39.065 [2024-07-26 13:12:19.562891] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1474d10 name raid_bdev1, state offline 00:12:39.065 13:12:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 665899 00:12:39.065 [2024-07-26 13:12:19.578418] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:39.325 13:12:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:12:39.325 00:12:39.325 real 0m9.997s 00:12:39.325 user 0m17.843s 00:12:39.325 sys 0m1.887s 00:12:39.325 13:12:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:39.325 13:12:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.325 ************************************ 00:12:39.325 END TEST raid_superblock_test 00:12:39.325 ************************************ 00:12:39.325 13:12:19 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:12:39.325 13:12:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:39.325 13:12:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:39.325 13:12:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:39.325 
************************************ 00:12:39.325 START TEST raid_read_error_test 00:12:39.325 ************************************ 00:12:39.325 13:12:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:12:39.325 13:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:12:39.325 13:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:12:39.325 13:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # 
local bdevperf_log 00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.d9mMjfEH8W 00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=667719 00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 667719 /var/tmp/spdk-raid.sock 00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 667719 ']' 00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:39.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:39.585 13:12:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.585 [2024-07-26 13:12:19.925023] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:12:39.585 [2024-07-26 13:12:19.925083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid667719 ] 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3f:01.7 
cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:39.585 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:39.585 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:39.585 [2024-07-26 13:12:20.059376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.844 [2024-07-26 13:12:20.144093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.844 [2024-07-26 13:12:20.204874] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.844 [2024-07-26 13:12:20.204903] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.412 13:12:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:40.412 13:12:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:40.412 13:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:12:40.412 13:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:40.671 BaseBdev1_malloc 00:12:40.671 13:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:40.929 true 00:12:40.929 13:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:41.188 [2024-07-26 13:12:21.553567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:41.188 [2024-07-26 13:12:21.553608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.188 [2024-07-26 13:12:21.553625] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2089190 00:12:41.188 [2024-07-26 13:12:21.553637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.188 [2024-07-26 13:12:21.555103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.188 [2024-07-26 13:12:21.555130] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:41.188 BaseBdev1 00:12:41.188 13:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:12:41.188 13:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:41.446 BaseBdev2_malloc 00:12:41.446 13:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:41.446 true 00:12:41.705 13:12:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:41.705 [2024-07-26 13:12:22.187683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:41.705 [2024-07-26 13:12:22.187723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.705 [2024-07-26 13:12:22.187740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x208de20 00:12:41.705 [2024-07-26 13:12:22.187751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.705 [2024-07-26 13:12:22.189018] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.705 [2024-07-26 13:12:22.189044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:41.705 BaseBdev2 00:12:41.705 13:12:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:12:42.028 [2024-07-26 13:12:22.416314] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:42.028 [2024-07-26 13:12:22.417392] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:42.028 [2024-07-26 13:12:22.417546] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x208fa50 00:12:42.028 [2024-07-26 13:12:22.417559] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:42.028 [2024-07-26 13:12:22.417722] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2092770 00:12:42.028 [2024-07-26 13:12:22.417848] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x208fa50 00:12:42.028 [2024-07-26 13:12:22.417857] 
bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x208fa50 00:12:42.028 [2024-07-26 13:12:22.417958] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.028 13:12:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:42.028 13:12:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:42.028 13:12:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:42.028 13:12:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:42.028 13:12:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:42.028 13:12:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:42.028 13:12:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:42.028 13:12:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:42.028 13:12:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:42.028 13:12:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:42.028 13:12:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:42.028 13:12:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.287 13:12:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:42.287 "name": "raid_bdev1", 00:12:42.287 "uuid": "1db44991-1ca0-465f-b839-bb4874ac8c94", 00:12:42.287 "strip_size_kb": 64, 00:12:42.287 "state": "online", 00:12:42.287 "raid_level": "concat", 00:12:42.287 "superblock": true, 
00:12:42.287 "num_base_bdevs": 2, 00:12:42.287 "num_base_bdevs_discovered": 2, 00:12:42.287 "num_base_bdevs_operational": 2, 00:12:42.287 "base_bdevs_list": [ 00:12:42.287 { 00:12:42.287 "name": "BaseBdev1", 00:12:42.287 "uuid": "e0f54dc2-2caf-5115-824c-13fde77aa8f0", 00:12:42.287 "is_configured": true, 00:12:42.287 "data_offset": 2048, 00:12:42.287 "data_size": 63488 00:12:42.287 }, 00:12:42.287 { 00:12:42.287 "name": "BaseBdev2", 00:12:42.287 "uuid": "052ea23a-8a23-5979-af0e-692f25fd7ea5", 00:12:42.287 "is_configured": true, 00:12:42.287 "data_offset": 2048, 00:12:42.287 "data_size": 63488 00:12:42.287 } 00:12:42.287 ] 00:12:42.287 }' 00:12:42.287 13:12:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:42.287 13:12:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.857 13:12:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:12:42.857 13:12:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:42.857 [2024-07-26 13:12:23.314924] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x208f010 00:12:43.797 13:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:44.056 13:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:12:44.056 13:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:12:44.056 13:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:12:44.056 13:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:44.056 13:12:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:44.056 13:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:44.056 13:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:44.056 13:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:44.056 13:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:44.056 13:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:44.056 13:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:44.056 13:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:44.056 13:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:44.056 13:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.056 13:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:44.316 13:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:44.316 "name": "raid_bdev1", 00:12:44.316 "uuid": "1db44991-1ca0-465f-b839-bb4874ac8c94", 00:12:44.316 "strip_size_kb": 64, 00:12:44.316 "state": "online", 00:12:44.316 "raid_level": "concat", 00:12:44.316 "superblock": true, 00:12:44.316 "num_base_bdevs": 2, 00:12:44.316 "num_base_bdevs_discovered": 2, 00:12:44.316 "num_base_bdevs_operational": 2, 00:12:44.316 "base_bdevs_list": [ 00:12:44.316 { 00:12:44.316 "name": "BaseBdev1", 00:12:44.316 "uuid": "e0f54dc2-2caf-5115-824c-13fde77aa8f0", 00:12:44.316 "is_configured": true, 00:12:44.316 "data_offset": 2048, 00:12:44.316 "data_size": 63488 00:12:44.316 }, 
00:12:44.316 { 00:12:44.316 "name": "BaseBdev2", 00:12:44.316 "uuid": "052ea23a-8a23-5979-af0e-692f25fd7ea5", 00:12:44.316 "is_configured": true, 00:12:44.316 "data_offset": 2048, 00:12:44.316 "data_size": 63488 00:12:44.316 } 00:12:44.316 ] 00:12:44.316 }' 00:12:44.316 13:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:44.316 13:12:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.884 13:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:45.143 [2024-07-26 13:12:25.506076] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:45.143 [2024-07-26 13:12:25.506110] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:45.143 [2024-07-26 13:12:25.509101] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:45.143 [2024-07-26 13:12:25.509135] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.143 [2024-07-26 13:12:25.509167] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:45.143 [2024-07-26 13:12:25.509177] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x208fa50 name raid_bdev1, state offline 00:12:45.143 0 00:12:45.143 13:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 667719 00:12:45.143 13:12:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 667719 ']' 00:12:45.144 13:12:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 667719 00:12:45.144 13:12:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:12:45.144 13:12:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:45.144 13:12:25 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 667719 00:12:45.144 13:12:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:45.144 13:12:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:45.144 13:12:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 667719' 00:12:45.144 killing process with pid 667719 00:12:45.144 13:12:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 667719 00:12:45.144 [2024-07-26 13:12:25.584965] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:45.144 13:12:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 667719 00:12:45.144 [2024-07-26 13:12:25.594823] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:45.403 13:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.d9mMjfEH8W 00:12:45.403 13:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:12:45.403 13:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:12:45.403 13:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.46 00:12:45.403 13:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:12:45.403 13:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:45.403 13:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:45.403 13:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.46 != \0\.\0\0 ]] 00:12:45.403 00:12:45.403 real 0m5.954s 00:12:45.403 user 0m9.212s 00:12:45.403 sys 0m1.074s 00:12:45.403 13:12:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:45.403 13:12:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:45.403 ************************************ 00:12:45.403 END TEST raid_read_error_test 00:12:45.403 ************************************ 00:12:45.403 13:12:25 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:12:45.403 13:12:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:45.403 13:12:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:45.403 13:12:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:45.403 ************************************ 00:12:45.403 START TEST raid_write_error_test 00:12:45.403 ************************************ 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:12:45.403 13:12:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.KqmafX9ZUP 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=668879 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 668879 /var/tmp/spdk-raid.sock 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 668879 ']' 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:45.403 
13:12:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:45.403 13:12:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:45.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:45.404 13:12:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:45.404 13:12:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.663 [2024-07-26 13:12:25.958375] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:12:45.663 [2024-07-26 13:12:25.958430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid668879 ] 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3d:01.6 cannot 
be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3d:02.3 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:45.663 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:45.663 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:45.663 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:45.663 [2024-07-26 13:12:26.092087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.663 [2024-07-26 13:12:26.175070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.923 [2024-07-26 13:12:26.229877] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.923 [2024-07-26 13:12:26.229907] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:46.491 13:12:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:46.491 13:12:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:46.491 13:12:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:12:46.491 13:12:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:46.491 BaseBdev1_malloc 00:12:46.491 13:12:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:46.750 true 00:12:46.750 13:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:47.009 [2024-07-26 13:12:27.401746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:47.009 [2024-07-26 13:12:27.401784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.009 [2024-07-26 13:12:27.401801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25e2190 00:12:47.009 [2024-07-26 13:12:27.401812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.009 [2024-07-26 13:12:27.403255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.009 [2024-07-26 13:12:27.403281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:47.009 BaseBdev1 00:12:47.009 13:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:12:47.009 13:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:47.268 BaseBdev2_malloc 00:12:47.268 13:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:47.528 true 00:12:47.528 13:12:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:47.787 [2024-07-26 13:12:28.083607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:47.787 [2024-07-26 13:12:28.083650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.787 [2024-07-26 13:12:28.083670] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25e6e20 00:12:47.787 [2024-07-26 13:12:28.083682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.787 [2024-07-26 13:12:28.085137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.787 [2024-07-26 13:12:28.085176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:47.787 BaseBdev2 00:12:47.787 13:12:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:12:47.787 [2024-07-26 13:12:28.312242] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:47.787 [2024-07-26 13:12:28.313432] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:48.046 [2024-07-26 13:12:28.313591] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 
0x25e8a50 00:12:48.046 [2024-07-26 13:12:28.313603] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:48.046 [2024-07-26 13:12:28.313795] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25eb770 00:12:48.046 [2024-07-26 13:12:28.313929] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25e8a50 00:12:48.046 [2024-07-26 13:12:28.313943] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x25e8a50 00:12:48.046 [2024-07-26 13:12:28.314054] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.046 13:12:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:48.046 13:12:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:48.046 13:12:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:48.046 13:12:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:48.046 13:12:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:48.046 13:12:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:48.046 13:12:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:48.046 13:12:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:48.046 13:12:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:48.046 13:12:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:48.046 13:12:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:48.046 13:12:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.046 13:12:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:48.046 "name": "raid_bdev1", 00:12:48.046 "uuid": "f8b37974-6110-481f-8a79-940f5a7c8fe3", 00:12:48.046 "strip_size_kb": 64, 00:12:48.046 "state": "online", 00:12:48.046 "raid_level": "concat", 00:12:48.046 "superblock": true, 00:12:48.046 "num_base_bdevs": 2, 00:12:48.046 "num_base_bdevs_discovered": 2, 00:12:48.046 "num_base_bdevs_operational": 2, 00:12:48.046 "base_bdevs_list": [ 00:12:48.046 { 00:12:48.046 "name": "BaseBdev1", 00:12:48.046 "uuid": "b754454c-bcf2-5c48-92b0-f882de689ad6", 00:12:48.046 "is_configured": true, 00:12:48.046 "data_offset": 2048, 00:12:48.046 "data_size": 63488 00:12:48.046 }, 00:12:48.046 { 00:12:48.046 "name": "BaseBdev2", 00:12:48.046 "uuid": "4c51841f-32f1-5c64-8d02-9c80be70071d", 00:12:48.046 "is_configured": true, 00:12:48.046 "data_offset": 2048, 00:12:48.046 "data_size": 63488 00:12:48.046 } 00:12:48.046 ] 00:12:48.046 }' 00:12:48.046 13:12:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:48.046 13:12:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.614 13:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:12:48.614 13:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:48.874 [2024-07-26 13:12:29.230907] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25e8010 00:12:49.813 13:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:50.072 13:12:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:12:50.072 13:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:12:50.072 13:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:12:50.072 13:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:50.072 13:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:50.072 13:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:50.072 13:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:12:50.072 13:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:50.072 13:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:50.072 13:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:50.072 13:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:50.072 13:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:50.072 13:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:50.072 13:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:50.072 13:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.332 13:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:50.332 "name": "raid_bdev1", 00:12:50.332 "uuid": "f8b37974-6110-481f-8a79-940f5a7c8fe3", 00:12:50.332 "strip_size_kb": 64, 00:12:50.332 "state": "online", 00:12:50.332 
"raid_level": "concat", 00:12:50.332 "superblock": true, 00:12:50.332 "num_base_bdevs": 2, 00:12:50.332 "num_base_bdevs_discovered": 2, 00:12:50.332 "num_base_bdevs_operational": 2, 00:12:50.332 "base_bdevs_list": [ 00:12:50.332 { 00:12:50.332 "name": "BaseBdev1", 00:12:50.332 "uuid": "b754454c-bcf2-5c48-92b0-f882de689ad6", 00:12:50.332 "is_configured": true, 00:12:50.332 "data_offset": 2048, 00:12:50.332 "data_size": 63488 00:12:50.332 }, 00:12:50.332 { 00:12:50.332 "name": "BaseBdev2", 00:12:50.332 "uuid": "4c51841f-32f1-5c64-8d02-9c80be70071d", 00:12:50.332 "is_configured": true, 00:12:50.332 "data_offset": 2048, 00:12:50.332 "data_size": 63488 00:12:50.332 } 00:12:50.332 ] 00:12:50.332 }' 00:12:50.332 13:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:50.332 13:12:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.900 13:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:50.900 [2024-07-26 13:12:31.377503] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:50.900 [2024-07-26 13:12:31.377540] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:50.900 [2024-07-26 13:12:31.380458] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:50.900 [2024-07-26 13:12:31.380489] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.900 [2024-07-26 13:12:31.380515] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:50.900 [2024-07-26 13:12:31.380525] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25e8a50 name raid_bdev1, state offline 00:12:50.900 0 00:12:50.900 13:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 668879 
00:12:50.900 13:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 668879 ']' 00:12:50.900 13:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 668879 00:12:50.900 13:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:12:50.900 13:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:50.900 13:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 668879 00:12:51.160 13:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:51.160 13:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:51.160 13:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 668879' 00:12:51.160 killing process with pid 668879 00:12:51.160 13:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 668879 00:12:51.160 [2024-07-26 13:12:31.431954] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:51.160 13:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 668879 00:12:51.160 [2024-07-26 13:12:31.442106] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:51.160 13:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.KqmafX9ZUP 00:12:51.160 13:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:12:51.160 13:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:12:51.160 13:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:12:51.160 13:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:12:51.160 13:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 
in 00:12:51.160 13:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:51.160 13:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:12:51.160 00:12:51.160 real 0m5.766s 00:12:51.160 user 0m8.916s 00:12:51.160 sys 0m1.017s 00:12:51.160 13:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:51.160 13:12:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.160 ************************************ 00:12:51.160 END TEST raid_write_error_test 00:12:51.160 ************************************ 00:12:51.420 13:12:31 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:12:51.420 13:12:31 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:12:51.420 13:12:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:51.420 13:12:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:51.420 13:12:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:51.420 ************************************ 00:12:51.420 START TEST raid_state_function_test 00:12:51.420 ************************************ 00:12:51.420 13:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:12:51.420 13:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:12:51.420 13:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:12:51.420 13:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:12:51.420 13:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:51.420 13:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:51.420 13:12:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:51.420 13:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:51.420 13:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:51.420 13:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:51.420 13:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:51.420 13:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:51.420 13:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:51.420 13:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:51.420 13:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:51.421 13:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:51.421 13:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:51.421 13:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:51.421 13:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:51.421 13:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:12:51.421 13:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:12:51.421 13:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:12:51.421 13:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:12:51.421 13:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=670020 00:12:51.421 13:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid 
pid: 670020' 00:12:51.421 Process raid pid: 670020 00:12:51.421 13:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:51.421 13:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 670020 /var/tmp/spdk-raid.sock 00:12:51.421 13:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 670020 ']' 00:12:51.421 13:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:51.421 13:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:51.421 13:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:51.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:51.421 13:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:51.421 13:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.421 [2024-07-26 13:12:31.802656] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:12:51.421 [2024-07-26 13:12:31.802715] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:51.421 EAL: Requested device 0000:3d:01.0 cannot be used
00:12:51.421 [2024-07-26 13:12:31.938658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.681 [2024-07-26 13:12:32.029275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.681 [2024-07-26 13:12:32.083240] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:51.681 [2024-07-26 13:12:32.083266] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:52.249 13:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:52.249 13:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:12:52.249 13:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:52.508 [2024-07-26 13:12:32.912863] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:52.509 [2024-07-26 13:12:32.912906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:52.509 [2024-07-26 13:12:32.912917] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:52.509 [2024-07-26 13:12:32.912927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:52.509 13:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:52.509 13:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:52.509 13:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:52.509 13:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:52.509 13:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:52.509 13:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:52.509 13:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:52.509 13:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:52.509 13:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:52.509 13:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:52.509 13:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:52.509 13:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.768 13:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:52.768 "name": "Existed_Raid", 00:12:52.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.768 "strip_size_kb": 0, 
00:12:52.768 "state": "configuring", 00:12:52.768 "raid_level": "raid1", 00:12:52.768 "superblock": false, 00:12:52.768 "num_base_bdevs": 2, 00:12:52.768 "num_base_bdevs_discovered": 0, 00:12:52.768 "num_base_bdevs_operational": 2, 00:12:52.768 "base_bdevs_list": [ 00:12:52.768 { 00:12:52.768 "name": "BaseBdev1", 00:12:52.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.768 "is_configured": false, 00:12:52.768 "data_offset": 0, 00:12:52.768 "data_size": 0 00:12:52.768 }, 00:12:52.768 { 00:12:52.768 "name": "BaseBdev2", 00:12:52.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.768 "is_configured": false, 00:12:52.768 "data_offset": 0, 00:12:52.768 "data_size": 0 00:12:52.768 } 00:12:52.768 ] 00:12:52.768 }' 00:12:52.768 13:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:52.768 13:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.336 13:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:53.595 [2024-07-26 13:12:33.955495] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:53.595 [2024-07-26 13:12:33.955529] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x870f20 name Existed_Raid, state configuring 00:12:53.595 13:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:53.853 [2024-07-26 13:12:34.184096] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:53.853 [2024-07-26 13:12:34.184124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:53.853 [2024-07-26 13:12:34.184133] bdev.c:8190:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:53.853 [2024-07-26 13:12:34.184151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:53.853 13:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:54.112 [2024-07-26 13:12:34.410333] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:54.112 BaseBdev1 00:12:54.112 13:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:54.112 13:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:54.112 13:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:54.112 13:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:54.112 13:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:54.112 13:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:54.112 13:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:54.371 13:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:54.371 [ 00:12:54.371 { 00:12:54.371 "name": "BaseBdev1", 00:12:54.371 "aliases": [ 00:12:54.371 "8ce30590-c91c-4763-93ae-ceaaf925924d" 00:12:54.371 ], 00:12:54.371 "product_name": "Malloc disk", 00:12:54.371 "block_size": 512, 00:12:54.371 "num_blocks": 65536, 00:12:54.371 "uuid": "8ce30590-c91c-4763-93ae-ceaaf925924d", 00:12:54.371 
"assigned_rate_limits": { 00:12:54.371 "rw_ios_per_sec": 0, 00:12:54.371 "rw_mbytes_per_sec": 0, 00:12:54.371 "r_mbytes_per_sec": 0, 00:12:54.371 "w_mbytes_per_sec": 0 00:12:54.371 }, 00:12:54.371 "claimed": true, 00:12:54.371 "claim_type": "exclusive_write", 00:12:54.371 "zoned": false, 00:12:54.371 "supported_io_types": { 00:12:54.371 "read": true, 00:12:54.371 "write": true, 00:12:54.371 "unmap": true, 00:12:54.371 "flush": true, 00:12:54.371 "reset": true, 00:12:54.371 "nvme_admin": false, 00:12:54.371 "nvme_io": false, 00:12:54.371 "nvme_io_md": false, 00:12:54.371 "write_zeroes": true, 00:12:54.371 "zcopy": true, 00:12:54.371 "get_zone_info": false, 00:12:54.371 "zone_management": false, 00:12:54.371 "zone_append": false, 00:12:54.371 "compare": false, 00:12:54.371 "compare_and_write": false, 00:12:54.371 "abort": true, 00:12:54.371 "seek_hole": false, 00:12:54.371 "seek_data": false, 00:12:54.371 "copy": true, 00:12:54.371 "nvme_iov_md": false 00:12:54.371 }, 00:12:54.371 "memory_domains": [ 00:12:54.371 { 00:12:54.371 "dma_device_id": "system", 00:12:54.371 "dma_device_type": 1 00:12:54.371 }, 00:12:54.371 { 00:12:54.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.371 "dma_device_type": 2 00:12:54.371 } 00:12:54.371 ], 00:12:54.371 "driver_specific": {} 00:12:54.371 } 00:12:54.371 ] 00:12:54.371 13:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:54.371 13:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:54.371 13:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:54.371 13:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:54.371 13:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:54.371 13:12:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:54.371 13:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:54.371 13:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:54.371 13:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:54.371 13:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:54.371 13:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:54.371 13:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:54.371 13:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.660 13:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:54.660 "name": "Existed_Raid", 00:12:54.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.660 "strip_size_kb": 0, 00:12:54.660 "state": "configuring", 00:12:54.660 "raid_level": "raid1", 00:12:54.660 "superblock": false, 00:12:54.660 "num_base_bdevs": 2, 00:12:54.660 "num_base_bdevs_discovered": 1, 00:12:54.660 "num_base_bdevs_operational": 2, 00:12:54.660 "base_bdevs_list": [ 00:12:54.660 { 00:12:54.660 "name": "BaseBdev1", 00:12:54.660 "uuid": "8ce30590-c91c-4763-93ae-ceaaf925924d", 00:12:54.660 "is_configured": true, 00:12:54.660 "data_offset": 0, 00:12:54.660 "data_size": 65536 00:12:54.660 }, 00:12:54.660 { 00:12:54.660 "name": "BaseBdev2", 00:12:54.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.660 "is_configured": false, 00:12:54.660 "data_offset": 0, 00:12:54.660 "data_size": 0 00:12:54.660 } 00:12:54.660 ] 00:12:54.660 }' 00:12:54.661 13:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:12:54.661 13:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.265 13:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:55.536 [2024-07-26 13:12:35.862178] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:55.536 [2024-07-26 13:12:35.862219] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x870810 name Existed_Raid, state configuring 00:12:55.536 13:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:12:55.794 [2024-07-26 13:12:36.086787] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:55.794 [2024-07-26 13:12:36.088196] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:55.794 [2024-07-26 13:12:36.088230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:55.794 13:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:55.794 13:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:55.794 13:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:12:55.794 13:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:55.794 13:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:55.794 13:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:55.794 13:12:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:55.794 13:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:55.794 13:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:55.794 13:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:55.794 13:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:55.794 13:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:55.794 13:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.794 13:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.052 13:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:56.052 "name": "Existed_Raid", 00:12:56.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.052 "strip_size_kb": 0, 00:12:56.052 "state": "configuring", 00:12:56.052 "raid_level": "raid1", 00:12:56.052 "superblock": false, 00:12:56.052 "num_base_bdevs": 2, 00:12:56.052 "num_base_bdevs_discovered": 1, 00:12:56.052 "num_base_bdevs_operational": 2, 00:12:56.052 "base_bdevs_list": [ 00:12:56.052 { 00:12:56.052 "name": "BaseBdev1", 00:12:56.052 "uuid": "8ce30590-c91c-4763-93ae-ceaaf925924d", 00:12:56.052 "is_configured": true, 00:12:56.052 "data_offset": 0, 00:12:56.052 "data_size": 65536 00:12:56.052 }, 00:12:56.052 { 00:12:56.052 "name": "BaseBdev2", 00:12:56.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.052 "is_configured": false, 00:12:56.052 "data_offset": 0, 00:12:56.052 "data_size": 0 00:12:56.052 } 00:12:56.052 ] 00:12:56.052 }' 00:12:56.052 13:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:12:56.052 13:12:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.618 13:12:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:56.618 [2024-07-26 13:12:37.112787] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:56.618 [2024-07-26 13:12:37.112826] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x871610 00:12:56.618 [2024-07-26 13:12:37.112836] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:56.618 [2024-07-26 13:12:37.113022] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa15250 00:12:56.618 [2024-07-26 13:12:37.113156] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x871610 00:12:56.618 [2024-07-26 13:12:37.113166] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x871610 00:12:56.618 [2024-07-26 13:12:37.113331] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.618 BaseBdev2 00:12:56.618 13:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:56.618 13:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:56.618 13:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:56.618 13:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:56.618 13:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:56.618 13:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:56.618 13:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:56.876 13:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:57.133 [ 00:12:57.133 { 00:12:57.133 "name": "BaseBdev2", 00:12:57.133 "aliases": [ 00:12:57.133 "9e1ddda7-dcba-4922-8db2-69b2c00da816" 00:12:57.133 ], 00:12:57.133 "product_name": "Malloc disk", 00:12:57.133 "block_size": 512, 00:12:57.133 "num_blocks": 65536, 00:12:57.133 "uuid": "9e1ddda7-dcba-4922-8db2-69b2c00da816", 00:12:57.133 "assigned_rate_limits": { 00:12:57.133 "rw_ios_per_sec": 0, 00:12:57.133 "rw_mbytes_per_sec": 0, 00:12:57.133 "r_mbytes_per_sec": 0, 00:12:57.133 "w_mbytes_per_sec": 0 00:12:57.133 }, 00:12:57.133 "claimed": true, 00:12:57.133 "claim_type": "exclusive_write", 00:12:57.133 "zoned": false, 00:12:57.133 "supported_io_types": { 00:12:57.133 "read": true, 00:12:57.133 "write": true, 00:12:57.133 "unmap": true, 00:12:57.133 "flush": true, 00:12:57.133 "reset": true, 00:12:57.133 "nvme_admin": false, 00:12:57.133 "nvme_io": false, 00:12:57.133 "nvme_io_md": false, 00:12:57.133 "write_zeroes": true, 00:12:57.133 "zcopy": true, 00:12:57.133 "get_zone_info": false, 00:12:57.133 "zone_management": false, 00:12:57.133 "zone_append": false, 00:12:57.133 "compare": false, 00:12:57.133 "compare_and_write": false, 00:12:57.133 "abort": true, 00:12:57.133 "seek_hole": false, 00:12:57.133 "seek_data": false, 00:12:57.133 "copy": true, 00:12:57.133 "nvme_iov_md": false 00:12:57.133 }, 00:12:57.133 "memory_domains": [ 00:12:57.133 { 00:12:57.133 "dma_device_id": "system", 00:12:57.133 "dma_device_type": 1 00:12:57.133 }, 00:12:57.133 { 00:12:57.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.133 "dma_device_type": 2 00:12:57.133 } 00:12:57.133 ], 00:12:57.133 "driver_specific": {} 00:12:57.133 } 00:12:57.133 ] 
00:12:57.133 13:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:57.133 13:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:57.133 13:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:57.133 13:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:57.133 13:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:57.133 13:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:57.133 13:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:57.133 13:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:57.133 13:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:57.133 13:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:57.133 13:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:57.133 13:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:57.133 13:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:57.133 13:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:57.133 13:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.391 13:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:57.391 "name": "Existed_Raid", 00:12:57.391 "uuid": "5d61d751-a92a-4142-865a-38cebcb36b49", 
00:12:57.391 "strip_size_kb": 0, 00:12:57.391 "state": "online", 00:12:57.391 "raid_level": "raid1", 00:12:57.391 "superblock": false, 00:12:57.391 "num_base_bdevs": 2, 00:12:57.391 "num_base_bdevs_discovered": 2, 00:12:57.391 "num_base_bdevs_operational": 2, 00:12:57.391 "base_bdevs_list": [ 00:12:57.391 { 00:12:57.391 "name": "BaseBdev1", 00:12:57.391 "uuid": "8ce30590-c91c-4763-93ae-ceaaf925924d", 00:12:57.391 "is_configured": true, 00:12:57.391 "data_offset": 0, 00:12:57.391 "data_size": 65536 00:12:57.391 }, 00:12:57.391 { 00:12:57.391 "name": "BaseBdev2", 00:12:57.391 "uuid": "9e1ddda7-dcba-4922-8db2-69b2c00da816", 00:12:57.391 "is_configured": true, 00:12:57.391 "data_offset": 0, 00:12:57.391 "data_size": 65536 00:12:57.391 } 00:12:57.391 ] 00:12:57.391 }' 00:12:57.391 13:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:57.391 13:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.957 13:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:57.957 13:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:57.957 13:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:57.957 13:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:57.957 13:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:57.957 13:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:57.957 13:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:57.957 13:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:58.215 [2024-07-26 13:12:38.536782] 
bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:58.215 13:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:58.215 "name": "Existed_Raid", 00:12:58.215 "aliases": [ 00:12:58.215 "5d61d751-a92a-4142-865a-38cebcb36b49" 00:12:58.215 ], 00:12:58.215 "product_name": "Raid Volume", 00:12:58.215 "block_size": 512, 00:12:58.215 "num_blocks": 65536, 00:12:58.215 "uuid": "5d61d751-a92a-4142-865a-38cebcb36b49", 00:12:58.215 "assigned_rate_limits": { 00:12:58.215 "rw_ios_per_sec": 0, 00:12:58.215 "rw_mbytes_per_sec": 0, 00:12:58.215 "r_mbytes_per_sec": 0, 00:12:58.215 "w_mbytes_per_sec": 0 00:12:58.215 }, 00:12:58.215 "claimed": false, 00:12:58.215 "zoned": false, 00:12:58.215 "supported_io_types": { 00:12:58.215 "read": true, 00:12:58.215 "write": true, 00:12:58.215 "unmap": false, 00:12:58.215 "flush": false, 00:12:58.215 "reset": true, 00:12:58.215 "nvme_admin": false, 00:12:58.215 "nvme_io": false, 00:12:58.215 "nvme_io_md": false, 00:12:58.215 "write_zeroes": true, 00:12:58.215 "zcopy": false, 00:12:58.215 "get_zone_info": false, 00:12:58.215 "zone_management": false, 00:12:58.215 "zone_append": false, 00:12:58.215 "compare": false, 00:12:58.215 "compare_and_write": false, 00:12:58.215 "abort": false, 00:12:58.215 "seek_hole": false, 00:12:58.215 "seek_data": false, 00:12:58.215 "copy": false, 00:12:58.215 "nvme_iov_md": false 00:12:58.215 }, 00:12:58.215 "memory_domains": [ 00:12:58.215 { 00:12:58.215 "dma_device_id": "system", 00:12:58.215 "dma_device_type": 1 00:12:58.215 }, 00:12:58.215 { 00:12:58.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.215 "dma_device_type": 2 00:12:58.215 }, 00:12:58.215 { 00:12:58.215 "dma_device_id": "system", 00:12:58.215 "dma_device_type": 1 00:12:58.215 }, 00:12:58.215 { 00:12:58.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.215 "dma_device_type": 2 00:12:58.215 } 00:12:58.215 ], 00:12:58.215 "driver_specific": { 00:12:58.215 "raid": { 
00:12:58.215 "uuid": "5d61d751-a92a-4142-865a-38cebcb36b49", 00:12:58.215 "strip_size_kb": 0, 00:12:58.215 "state": "online", 00:12:58.215 "raid_level": "raid1", 00:12:58.215 "superblock": false, 00:12:58.215 "num_base_bdevs": 2, 00:12:58.215 "num_base_bdevs_discovered": 2, 00:12:58.215 "num_base_bdevs_operational": 2, 00:12:58.215 "base_bdevs_list": [ 00:12:58.215 { 00:12:58.215 "name": "BaseBdev1", 00:12:58.215 "uuid": "8ce30590-c91c-4763-93ae-ceaaf925924d", 00:12:58.215 "is_configured": true, 00:12:58.215 "data_offset": 0, 00:12:58.215 "data_size": 65536 00:12:58.215 }, 00:12:58.215 { 00:12:58.215 "name": "BaseBdev2", 00:12:58.215 "uuid": "9e1ddda7-dcba-4922-8db2-69b2c00da816", 00:12:58.215 "is_configured": true, 00:12:58.215 "data_offset": 0, 00:12:58.215 "data_size": 65536 00:12:58.215 } 00:12:58.215 ] 00:12:58.215 } 00:12:58.215 } 00:12:58.215 }' 00:12:58.215 13:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:58.215 13:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:58.215 BaseBdev2' 00:12:58.215 13:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:58.215 13:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:58.215 13:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:58.474 13:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:58.474 "name": "BaseBdev1", 00:12:58.474 "aliases": [ 00:12:58.474 "8ce30590-c91c-4763-93ae-ceaaf925924d" 00:12:58.474 ], 00:12:58.474 "product_name": "Malloc disk", 00:12:58.474 "block_size": 512, 00:12:58.474 "num_blocks": 65536, 00:12:58.474 "uuid": "8ce30590-c91c-4763-93ae-ceaaf925924d", 
00:12:58.474 "assigned_rate_limits": { 00:12:58.474 "rw_ios_per_sec": 0, 00:12:58.474 "rw_mbytes_per_sec": 0, 00:12:58.474 "r_mbytes_per_sec": 0, 00:12:58.474 "w_mbytes_per_sec": 0 00:12:58.474 }, 00:12:58.474 "claimed": true, 00:12:58.474 "claim_type": "exclusive_write", 00:12:58.474 "zoned": false, 00:12:58.474 "supported_io_types": { 00:12:58.474 "read": true, 00:12:58.474 "write": true, 00:12:58.474 "unmap": true, 00:12:58.474 "flush": true, 00:12:58.474 "reset": true, 00:12:58.474 "nvme_admin": false, 00:12:58.474 "nvme_io": false, 00:12:58.474 "nvme_io_md": false, 00:12:58.474 "write_zeroes": true, 00:12:58.474 "zcopy": true, 00:12:58.474 "get_zone_info": false, 00:12:58.474 "zone_management": false, 00:12:58.474 "zone_append": false, 00:12:58.474 "compare": false, 00:12:58.474 "compare_and_write": false, 00:12:58.474 "abort": true, 00:12:58.474 "seek_hole": false, 00:12:58.474 "seek_data": false, 00:12:58.474 "copy": true, 00:12:58.474 "nvme_iov_md": false 00:12:58.474 }, 00:12:58.474 "memory_domains": [ 00:12:58.474 { 00:12:58.474 "dma_device_id": "system", 00:12:58.474 "dma_device_type": 1 00:12:58.474 }, 00:12:58.474 { 00:12:58.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.474 "dma_device_type": 2 00:12:58.474 } 00:12:58.474 ], 00:12:58.474 "driver_specific": {} 00:12:58.474 }' 00:12:58.474 13:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.474 13:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.474 13:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:58.474 13:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.474 13:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.474 13:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:58.474 13:12:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.474 13:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.733 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:58.733 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.733 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.733 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:58.733 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:58.733 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:58.733 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:59.006 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:59.006 "name": "BaseBdev2", 00:12:59.006 "aliases": [ 00:12:59.006 "9e1ddda7-dcba-4922-8db2-69b2c00da816" 00:12:59.006 ], 00:12:59.006 "product_name": "Malloc disk", 00:12:59.006 "block_size": 512, 00:12:59.006 "num_blocks": 65536, 00:12:59.006 "uuid": "9e1ddda7-dcba-4922-8db2-69b2c00da816", 00:12:59.006 "assigned_rate_limits": { 00:12:59.006 "rw_ios_per_sec": 0, 00:12:59.006 "rw_mbytes_per_sec": 0, 00:12:59.006 "r_mbytes_per_sec": 0, 00:12:59.006 "w_mbytes_per_sec": 0 00:12:59.006 }, 00:12:59.006 "claimed": true, 00:12:59.006 "claim_type": "exclusive_write", 00:12:59.006 "zoned": false, 00:12:59.006 "supported_io_types": { 00:12:59.006 "read": true, 00:12:59.006 "write": true, 00:12:59.006 "unmap": true, 00:12:59.006 "flush": true, 00:12:59.006 "reset": true, 00:12:59.006 "nvme_admin": false, 00:12:59.006 "nvme_io": false, 00:12:59.006 "nvme_io_md": false, 00:12:59.006 "write_zeroes": true, 
00:12:59.006 "zcopy": true, 00:12:59.006 "get_zone_info": false, 00:12:59.006 "zone_management": false, 00:12:59.006 "zone_append": false, 00:12:59.006 "compare": false, 00:12:59.006 "compare_and_write": false, 00:12:59.006 "abort": true, 00:12:59.006 "seek_hole": false, 00:12:59.006 "seek_data": false, 00:12:59.006 "copy": true, 00:12:59.006 "nvme_iov_md": false 00:12:59.006 }, 00:12:59.006 "memory_domains": [ 00:12:59.006 { 00:12:59.006 "dma_device_id": "system", 00:12:59.006 "dma_device_type": 1 00:12:59.006 }, 00:12:59.006 { 00:12:59.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.006 "dma_device_type": 2 00:12:59.006 } 00:12:59.006 ], 00:12:59.006 "driver_specific": {} 00:12:59.006 }' 00:12:59.007 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:59.007 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:59.007 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:59.007 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:59.007 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:59.007 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:59.007 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:59.007 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:59.273 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:59.273 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:59.273 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:59.273 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:59.273 13:12:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:59.533 [2024-07-26 13:12:39.836000] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:59.533 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:59.533 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:12:59.533 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:59.533 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:59.533 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:12:59.533 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:12:59.533 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:59.533 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:59.533 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:59.533 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:59.533 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:12:59.533 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:59.533 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:59.533 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:59.533 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:59.533 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:59.533 13:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.792 13:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:59.792 "name": "Existed_Raid", 00:12:59.792 "uuid": "5d61d751-a92a-4142-865a-38cebcb36b49", 00:12:59.792 "strip_size_kb": 0, 00:12:59.792 "state": "online", 00:12:59.792 "raid_level": "raid1", 00:12:59.792 "superblock": false, 00:12:59.792 "num_base_bdevs": 2, 00:12:59.792 "num_base_bdevs_discovered": 1, 00:12:59.792 "num_base_bdevs_operational": 1, 00:12:59.792 "base_bdevs_list": [ 00:12:59.792 { 00:12:59.792 "name": null, 00:12:59.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.792 "is_configured": false, 00:12:59.792 "data_offset": 0, 00:12:59.792 "data_size": 65536 00:12:59.792 }, 00:12:59.792 { 00:12:59.792 "name": "BaseBdev2", 00:12:59.792 "uuid": "9e1ddda7-dcba-4922-8db2-69b2c00da816", 00:12:59.792 "is_configured": true, 00:12:59.792 "data_offset": 0, 00:12:59.792 "data_size": 65536 00:12:59.792 } 00:12:59.792 ] 00:12:59.792 }' 00:12:59.792 13:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:59.792 13:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.361 13:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:00.361 13:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:00.361 13:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:00.361 13:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:00.361 13:12:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:00.361 13:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:00.361 13:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:00.621 [2024-07-26 13:12:41.008146] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:00.621 [2024-07-26 13:12:41.008229] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:00.621 [2024-07-26 13:12:41.018721] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:00.621 [2024-07-26 13:12:41.018754] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:00.621 [2024-07-26 13:12:41.018765] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x871610 name Existed_Raid, state offline 00:13:00.621 13:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:00.621 13:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:00.621 13:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:00.621 13:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:00.881 13:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:00.881 13:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:00.881 13:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:13:00.881 13:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- 
# killprocess 670020 00:13:00.881 13:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 670020 ']' 00:13:00.881 13:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 670020 00:13:00.881 13:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:13:00.881 13:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:00.881 13:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 670020 00:13:00.881 13:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:00.881 13:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:00.881 13:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 670020' 00:13:00.881 killing process with pid 670020 00:13:00.881 13:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 670020 00:13:00.881 [2024-07-26 13:12:41.305838] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:00.881 13:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 670020 00:13:00.881 [2024-07-26 13:12:41.306694] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:13:01.141 00:13:01.141 real 0m9.760s 00:13:01.141 user 0m17.359s 00:13:01.141 sys 0m1.780s 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.141 ************************************ 00:13:01.141 END TEST raid_state_function_test 00:13:01.141 ************************************ 00:13:01.141 
13:12:41 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:13:01.141 13:12:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:01.141 13:12:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:01.141 13:12:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:01.141 ************************************ 00:13:01.141 START TEST raid_state_function_test_sb 00:13:01.141 ************************************ 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:01.141 13:12:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=671845 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 671845' 00:13:01.141 Process raid pid: 671845 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 671845 /var/tmp/spdk-raid.sock 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 671845 ']' 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:01.141 
13:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:01.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:01.141 13:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.141 [2024-07-26 13:12:41.610332] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:13:01.141 [2024-07-26 13:12:41.610374] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.400 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:01.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.400 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:01.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.400 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:01.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.400 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:01.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.400 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:01.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.400 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:01.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.400 EAL: Requested device 0000:3d:01.6 
cannot be used 00:13:01.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.400 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:01.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.400 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:01.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.400 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:01.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.400 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:01.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.400 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:01.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.400 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:01.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.400 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:01.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.400 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:01.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.400 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:01.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.400 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:01.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.400 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:01.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.400 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:01.400 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.401 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:01.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.401 EAL: Requested device 0000:3f:01.4 cannot be used 
00:13:01.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.401 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:01.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.401 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:01.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.401 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:01.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.401 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:01.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.401 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:01.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.401 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:01.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.401 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:01.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.401 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:01.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.401 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:01.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.401 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:01.401 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:01.401 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:01.401 [2024-07-26 13:12:41.728365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.401 [2024-07-26 13:12:41.817333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.401 [2024-07-26 13:12:41.874137] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:01.401 [2024-07-26 13:12:41.874179] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 
00:13:02.338 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:02.338 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:02.338 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:02.338 [2024-07-26 13:12:42.732201] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:02.338 [2024-07-26 13:12:42.732241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:02.338 [2024-07-26 13:12:42.732252] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:02.338 [2024-07-26 13:12:42.732263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:02.338 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:02.338 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:02.338 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:02.338 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:02.338 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:02.338 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:02.338 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:02.338 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:02.338 13:12:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:02.338 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:02.338 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:02.338 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.597 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:02.597 "name": "Existed_Raid", 00:13:02.597 "uuid": "ac703363-3372-4747-aeb8-aa44d0a4d39a", 00:13:02.597 "strip_size_kb": 0, 00:13:02.597 "state": "configuring", 00:13:02.597 "raid_level": "raid1", 00:13:02.597 "superblock": true, 00:13:02.597 "num_base_bdevs": 2, 00:13:02.597 "num_base_bdevs_discovered": 0, 00:13:02.597 "num_base_bdevs_operational": 2, 00:13:02.597 "base_bdevs_list": [ 00:13:02.597 { 00:13:02.597 "name": "BaseBdev1", 00:13:02.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.597 "is_configured": false, 00:13:02.597 "data_offset": 0, 00:13:02.597 "data_size": 0 00:13:02.597 }, 00:13:02.597 { 00:13:02.597 "name": "BaseBdev2", 00:13:02.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.597 "is_configured": false, 00:13:02.597 "data_offset": 0, 00:13:02.597 "data_size": 0 00:13:02.597 } 00:13:02.597 ] 00:13:02.597 }' 00:13:02.597 13:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:02.597 13:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.165 13:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:03.423 [2024-07-26 13:12:43.750986] 
bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:03.423 [2024-07-26 13:12:43.751016] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb6df20 name Existed_Raid, state configuring 00:13:03.423 13:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:03.681 [2024-07-26 13:12:43.979600] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:03.681 [2024-07-26 13:12:43.979626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:03.681 [2024-07-26 13:12:43.979635] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:03.681 [2024-07-26 13:12:43.979646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:03.681 13:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:03.940 [2024-07-26 13:12:44.209579] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:03.940 BaseBdev1 00:13:03.940 13:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:03.940 13:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:03.940 13:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:03.940 13:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:03.940 13:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:03.940 13:12:44 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:03.940 13:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:03.940 13:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:04.200 [ 00:13:04.200 { 00:13:04.200 "name": "BaseBdev1", 00:13:04.200 "aliases": [ 00:13:04.200 "6ec49f2c-c50e-4242-a888-4a3a62906528" 00:13:04.200 ], 00:13:04.200 "product_name": "Malloc disk", 00:13:04.200 "block_size": 512, 00:13:04.200 "num_blocks": 65536, 00:13:04.200 "uuid": "6ec49f2c-c50e-4242-a888-4a3a62906528", 00:13:04.200 "assigned_rate_limits": { 00:13:04.200 "rw_ios_per_sec": 0, 00:13:04.200 "rw_mbytes_per_sec": 0, 00:13:04.200 "r_mbytes_per_sec": 0, 00:13:04.200 "w_mbytes_per_sec": 0 00:13:04.200 }, 00:13:04.200 "claimed": true, 00:13:04.200 "claim_type": "exclusive_write", 00:13:04.200 "zoned": false, 00:13:04.200 "supported_io_types": { 00:13:04.200 "read": true, 00:13:04.200 "write": true, 00:13:04.200 "unmap": true, 00:13:04.200 "flush": true, 00:13:04.200 "reset": true, 00:13:04.200 "nvme_admin": false, 00:13:04.200 "nvme_io": false, 00:13:04.200 "nvme_io_md": false, 00:13:04.200 "write_zeroes": true, 00:13:04.200 "zcopy": true, 00:13:04.200 "get_zone_info": false, 00:13:04.200 "zone_management": false, 00:13:04.200 "zone_append": false, 00:13:04.200 "compare": false, 00:13:04.200 "compare_and_write": false, 00:13:04.200 "abort": true, 00:13:04.200 "seek_hole": false, 00:13:04.200 "seek_data": false, 00:13:04.200 "copy": true, 00:13:04.200 "nvme_iov_md": false 00:13:04.200 }, 00:13:04.200 "memory_domains": [ 00:13:04.200 { 00:13:04.200 "dma_device_id": "system", 00:13:04.200 "dma_device_type": 1 00:13:04.200 }, 00:13:04.200 { 00:13:04.200 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:04.200 "dma_device_type": 2 00:13:04.200 } 00:13:04.200 ], 00:13:04.200 "driver_specific": {} 00:13:04.200 } 00:13:04.200 ] 00:13:04.200 13:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:04.200 13:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:04.200 13:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:04.200 13:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:04.200 13:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:04.200 13:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:04.200 13:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:04.200 13:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:04.200 13:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:04.200 13:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:04.200 13:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:04.200 13:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:04.200 13:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.459 13:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:04.459 "name": "Existed_Raid", 00:13:04.459 "uuid": "1bd60da8-ee59-46dd-8737-b2d39cf6cee3", 
00:13:04.459 "strip_size_kb": 0, 00:13:04.459 "state": "configuring", 00:13:04.459 "raid_level": "raid1", 00:13:04.459 "superblock": true, 00:13:04.459 "num_base_bdevs": 2, 00:13:04.459 "num_base_bdevs_discovered": 1, 00:13:04.459 "num_base_bdevs_operational": 2, 00:13:04.459 "base_bdevs_list": [ 00:13:04.459 { 00:13:04.459 "name": "BaseBdev1", 00:13:04.459 "uuid": "6ec49f2c-c50e-4242-a888-4a3a62906528", 00:13:04.459 "is_configured": true, 00:13:04.459 "data_offset": 2048, 00:13:04.459 "data_size": 63488 00:13:04.459 }, 00:13:04.459 { 00:13:04.459 "name": "BaseBdev2", 00:13:04.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.459 "is_configured": false, 00:13:04.459 "data_offset": 0, 00:13:04.459 "data_size": 0 00:13:04.459 } 00:13:04.459 ] 00:13:04.459 }' 00:13:04.459 13:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:04.459 13:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.028 13:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:05.287 [2024-07-26 13:12:45.733613] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:05.287 [2024-07-26 13:12:45.733647] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb6d810 name Existed_Raid, state configuring 00:13:05.287 13:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:05.547 [2024-07-26 13:12:45.962242] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:05.547 [2024-07-26 13:12:45.963613] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:05.547 [2024-07-26 
13:12:45.963644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:05.547 13:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:05.547 13:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:05.547 13:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:05.547 13:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:05.547 13:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:05.547 13:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:05.547 13:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:05.547 13:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:05.547 13:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:05.547 13:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:05.547 13:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:05.547 13:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:05.547 13:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:05.547 13:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.806 13:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:05.806 "name": "Existed_Raid", 
00:13:05.806 "uuid": "563ef2e0-072d-42af-85e6-f7009be64959", 00:13:05.806 "strip_size_kb": 0, 00:13:05.806 "state": "configuring", 00:13:05.806 "raid_level": "raid1", 00:13:05.806 "superblock": true, 00:13:05.806 "num_base_bdevs": 2, 00:13:05.806 "num_base_bdevs_discovered": 1, 00:13:05.806 "num_base_bdevs_operational": 2, 00:13:05.806 "base_bdevs_list": [ 00:13:05.806 { 00:13:05.806 "name": "BaseBdev1", 00:13:05.806 "uuid": "6ec49f2c-c50e-4242-a888-4a3a62906528", 00:13:05.806 "is_configured": true, 00:13:05.806 "data_offset": 2048, 00:13:05.806 "data_size": 63488 00:13:05.806 }, 00:13:05.806 { 00:13:05.806 "name": "BaseBdev2", 00:13:05.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.806 "is_configured": false, 00:13:05.806 "data_offset": 0, 00:13:05.806 "data_size": 0 00:13:05.806 } 00:13:05.806 ] 00:13:05.806 }' 00:13:05.806 13:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:05.806 13:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.743 13:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:06.743 [2024-07-26 13:12:47.252837] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:06.743 [2024-07-26 13:12:47.252972] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xb6e610 00:13:06.743 [2024-07-26 13:12:47.252985] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:06.743 [2024-07-26 13:12:47.253150] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb5a690 00:13:06.743 [2024-07-26 13:12:47.253265] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb6e610 00:13:06.743 [2024-07-26 13:12:47.253275] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name Existed_Raid, raid_bdev 0xb6e610 00:13:06.743 [2024-07-26 13:12:47.253362] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.743 BaseBdev2 00:13:07.001 13:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:07.001 13:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:07.001 13:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:07.001 13:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:07.001 13:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:07.001 13:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:07.001 13:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:07.001 13:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:07.260 [ 00:13:07.260 { 00:13:07.260 "name": "BaseBdev2", 00:13:07.260 "aliases": [ 00:13:07.260 "ec1beac7-2dde-4d75-93c1-a4ffd6c6d0ae" 00:13:07.260 ], 00:13:07.260 "product_name": "Malloc disk", 00:13:07.260 "block_size": 512, 00:13:07.260 "num_blocks": 65536, 00:13:07.260 "uuid": "ec1beac7-2dde-4d75-93c1-a4ffd6c6d0ae", 00:13:07.260 "assigned_rate_limits": { 00:13:07.260 "rw_ios_per_sec": 0, 00:13:07.260 "rw_mbytes_per_sec": 0, 00:13:07.260 "r_mbytes_per_sec": 0, 00:13:07.260 "w_mbytes_per_sec": 0 00:13:07.260 }, 00:13:07.260 "claimed": true, 00:13:07.260 "claim_type": "exclusive_write", 00:13:07.260 "zoned": false, 00:13:07.260 "supported_io_types": { 00:13:07.260 "read": true, 00:13:07.260 
"write": true, 00:13:07.260 "unmap": true, 00:13:07.260 "flush": true, 00:13:07.260 "reset": true, 00:13:07.260 "nvme_admin": false, 00:13:07.260 "nvme_io": false, 00:13:07.260 "nvme_io_md": false, 00:13:07.260 "write_zeroes": true, 00:13:07.260 "zcopy": true, 00:13:07.260 "get_zone_info": false, 00:13:07.260 "zone_management": false, 00:13:07.260 "zone_append": false, 00:13:07.260 "compare": false, 00:13:07.260 "compare_and_write": false, 00:13:07.260 "abort": true, 00:13:07.260 "seek_hole": false, 00:13:07.260 "seek_data": false, 00:13:07.260 "copy": true, 00:13:07.260 "nvme_iov_md": false 00:13:07.260 }, 00:13:07.260 "memory_domains": [ 00:13:07.260 { 00:13:07.260 "dma_device_id": "system", 00:13:07.260 "dma_device_type": 1 00:13:07.260 }, 00:13:07.260 { 00:13:07.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.260 "dma_device_type": 2 00:13:07.260 } 00:13:07.260 ], 00:13:07.260 "driver_specific": {} 00:13:07.260 } 00:13:07.260 ] 00:13:07.260 13:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:07.260 13:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:07.260 13:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:07.260 13:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:07.260 13:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:07.260 13:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:07.260 13:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:07.260 13:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:07.260 13:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:13:07.260 13:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:07.260 13:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:07.260 13:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:07.260 13:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:07.260 13:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:07.260 13:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.260 13:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:07.260 "name": "Existed_Raid", 00:13:07.260 "uuid": "563ef2e0-072d-42af-85e6-f7009be64959", 00:13:07.260 "strip_size_kb": 0, 00:13:07.260 "state": "online", 00:13:07.260 "raid_level": "raid1", 00:13:07.260 "superblock": true, 00:13:07.260 "num_base_bdevs": 2, 00:13:07.260 "num_base_bdevs_discovered": 2, 00:13:07.260 "num_base_bdevs_operational": 2, 00:13:07.260 "base_bdevs_list": [ 00:13:07.260 { 00:13:07.260 "name": "BaseBdev1", 00:13:07.260 "uuid": "6ec49f2c-c50e-4242-a888-4a3a62906528", 00:13:07.260 "is_configured": true, 00:13:07.260 "data_offset": 2048, 00:13:07.260 "data_size": 63488 00:13:07.260 }, 00:13:07.260 { 00:13:07.260 "name": "BaseBdev2", 00:13:07.260 "uuid": "ec1beac7-2dde-4d75-93c1-a4ffd6c6d0ae", 00:13:07.260 "is_configured": true, 00:13:07.260 "data_offset": 2048, 00:13:07.260 "data_size": 63488 00:13:07.260 } 00:13:07.260 ] 00:13:07.260 }' 00:13:07.260 13:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:07.260 13:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:07.826 13:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:07.826 13:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:07.826 13:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:07.826 13:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:07.826 13:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:07.826 13:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:07.826 13:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:07.826 13:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:08.084 [2024-07-26 13:12:48.516411] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:08.084 13:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:08.085 "name": "Existed_Raid", 00:13:08.085 "aliases": [ 00:13:08.085 "563ef2e0-072d-42af-85e6-f7009be64959" 00:13:08.085 ], 00:13:08.085 "product_name": "Raid Volume", 00:13:08.085 "block_size": 512, 00:13:08.085 "num_blocks": 63488, 00:13:08.085 "uuid": "563ef2e0-072d-42af-85e6-f7009be64959", 00:13:08.085 "assigned_rate_limits": { 00:13:08.085 "rw_ios_per_sec": 0, 00:13:08.085 "rw_mbytes_per_sec": 0, 00:13:08.085 "r_mbytes_per_sec": 0, 00:13:08.085 "w_mbytes_per_sec": 0 00:13:08.085 }, 00:13:08.085 "claimed": false, 00:13:08.085 "zoned": false, 00:13:08.085 "supported_io_types": { 00:13:08.085 "read": true, 00:13:08.085 "write": true, 00:13:08.085 "unmap": false, 00:13:08.085 "flush": false, 00:13:08.085 "reset": true, 00:13:08.085 "nvme_admin": 
false, 00:13:08.085 "nvme_io": false, 00:13:08.085 "nvme_io_md": false, 00:13:08.085 "write_zeroes": true, 00:13:08.085 "zcopy": false, 00:13:08.085 "get_zone_info": false, 00:13:08.085 "zone_management": false, 00:13:08.085 "zone_append": false, 00:13:08.085 "compare": false, 00:13:08.085 "compare_and_write": false, 00:13:08.085 "abort": false, 00:13:08.085 "seek_hole": false, 00:13:08.085 "seek_data": false, 00:13:08.085 "copy": false, 00:13:08.085 "nvme_iov_md": false 00:13:08.085 }, 00:13:08.085 "memory_domains": [ 00:13:08.085 { 00:13:08.085 "dma_device_id": "system", 00:13:08.085 "dma_device_type": 1 00:13:08.085 }, 00:13:08.085 { 00:13:08.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.085 "dma_device_type": 2 00:13:08.085 }, 00:13:08.085 { 00:13:08.085 "dma_device_id": "system", 00:13:08.085 "dma_device_type": 1 00:13:08.085 }, 00:13:08.085 { 00:13:08.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.085 "dma_device_type": 2 00:13:08.085 } 00:13:08.085 ], 00:13:08.085 "driver_specific": { 00:13:08.085 "raid": { 00:13:08.085 "uuid": "563ef2e0-072d-42af-85e6-f7009be64959", 00:13:08.085 "strip_size_kb": 0, 00:13:08.085 "state": "online", 00:13:08.085 "raid_level": "raid1", 00:13:08.085 "superblock": true, 00:13:08.085 "num_base_bdevs": 2, 00:13:08.085 "num_base_bdevs_discovered": 2, 00:13:08.085 "num_base_bdevs_operational": 2, 00:13:08.085 "base_bdevs_list": [ 00:13:08.085 { 00:13:08.085 "name": "BaseBdev1", 00:13:08.085 "uuid": "6ec49f2c-c50e-4242-a888-4a3a62906528", 00:13:08.085 "is_configured": true, 00:13:08.085 "data_offset": 2048, 00:13:08.085 "data_size": 63488 00:13:08.085 }, 00:13:08.085 { 00:13:08.085 "name": "BaseBdev2", 00:13:08.085 "uuid": "ec1beac7-2dde-4d75-93c1-a4ffd6c6d0ae", 00:13:08.085 "is_configured": true, 00:13:08.085 "data_offset": 2048, 00:13:08.085 "data_size": 63488 00:13:08.085 } 00:13:08.085 ] 00:13:08.085 } 00:13:08.085 } 00:13:08.085 }' 00:13:08.085 13:12:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:08.085 13:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:08.085 BaseBdev2' 00:13:08.085 13:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:08.085 13:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:08.085 13:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:08.343 13:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:08.343 "name": "BaseBdev1", 00:13:08.343 "aliases": [ 00:13:08.343 "6ec49f2c-c50e-4242-a888-4a3a62906528" 00:13:08.343 ], 00:13:08.343 "product_name": "Malloc disk", 00:13:08.343 "block_size": 512, 00:13:08.343 "num_blocks": 65536, 00:13:08.343 "uuid": "6ec49f2c-c50e-4242-a888-4a3a62906528", 00:13:08.343 "assigned_rate_limits": { 00:13:08.343 "rw_ios_per_sec": 0, 00:13:08.343 "rw_mbytes_per_sec": 0, 00:13:08.343 "r_mbytes_per_sec": 0, 00:13:08.343 "w_mbytes_per_sec": 0 00:13:08.343 }, 00:13:08.343 "claimed": true, 00:13:08.343 "claim_type": "exclusive_write", 00:13:08.343 "zoned": false, 00:13:08.343 "supported_io_types": { 00:13:08.343 "read": true, 00:13:08.343 "write": true, 00:13:08.343 "unmap": true, 00:13:08.343 "flush": true, 00:13:08.343 "reset": true, 00:13:08.343 "nvme_admin": false, 00:13:08.343 "nvme_io": false, 00:13:08.343 "nvme_io_md": false, 00:13:08.343 "write_zeroes": true, 00:13:08.343 "zcopy": true, 00:13:08.343 "get_zone_info": false, 00:13:08.343 "zone_management": false, 00:13:08.343 "zone_append": false, 00:13:08.343 "compare": false, 00:13:08.343 "compare_and_write": false, 00:13:08.343 "abort": true, 00:13:08.343 "seek_hole": false, 00:13:08.343 "seek_data": 
false, 00:13:08.343 "copy": true, 00:13:08.343 "nvme_iov_md": false 00:13:08.343 }, 00:13:08.343 "memory_domains": [ 00:13:08.343 { 00:13:08.343 "dma_device_id": "system", 00:13:08.343 "dma_device_type": 1 00:13:08.343 }, 00:13:08.343 { 00:13:08.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.343 "dma_device_type": 2 00:13:08.343 } 00:13:08.343 ], 00:13:08.343 "driver_specific": {} 00:13:08.343 }' 00:13:08.343 13:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:08.343 13:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:08.650 13:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:08.650 13:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:08.650 13:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:08.651 13:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:08.651 13:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:08.651 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:08.651 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:08.651 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:08.651 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:08.651 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:08.651 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:08.651 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev2 00:13:08.651 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:08.909 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:08.909 "name": "BaseBdev2", 00:13:08.909 "aliases": [ 00:13:08.909 "ec1beac7-2dde-4d75-93c1-a4ffd6c6d0ae" 00:13:08.909 ], 00:13:08.909 "product_name": "Malloc disk", 00:13:08.909 "block_size": 512, 00:13:08.909 "num_blocks": 65536, 00:13:08.909 "uuid": "ec1beac7-2dde-4d75-93c1-a4ffd6c6d0ae", 00:13:08.909 "assigned_rate_limits": { 00:13:08.909 "rw_ios_per_sec": 0, 00:13:08.909 "rw_mbytes_per_sec": 0, 00:13:08.909 "r_mbytes_per_sec": 0, 00:13:08.909 "w_mbytes_per_sec": 0 00:13:08.909 }, 00:13:08.909 "claimed": true, 00:13:08.909 "claim_type": "exclusive_write", 00:13:08.909 "zoned": false, 00:13:08.909 "supported_io_types": { 00:13:08.909 "read": true, 00:13:08.909 "write": true, 00:13:08.909 "unmap": true, 00:13:08.909 "flush": true, 00:13:08.909 "reset": true, 00:13:08.909 "nvme_admin": false, 00:13:08.909 "nvme_io": false, 00:13:08.909 "nvme_io_md": false, 00:13:08.909 "write_zeroes": true, 00:13:08.909 "zcopy": true, 00:13:08.909 "get_zone_info": false, 00:13:08.909 "zone_management": false, 00:13:08.909 "zone_append": false, 00:13:08.909 "compare": false, 00:13:08.909 "compare_and_write": false, 00:13:08.909 "abort": true, 00:13:08.909 "seek_hole": false, 00:13:08.909 "seek_data": false, 00:13:08.909 "copy": true, 00:13:08.909 "nvme_iov_md": false 00:13:08.909 }, 00:13:08.909 "memory_domains": [ 00:13:08.909 { 00:13:08.909 "dma_device_id": "system", 00:13:08.909 "dma_device_type": 1 00:13:08.909 }, 00:13:08.909 { 00:13:08.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.909 "dma_device_type": 2 00:13:08.909 } 00:13:08.909 ], 00:13:08.909 "driver_specific": {} 00:13:08.909 }' 00:13:08.909 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:08.909 13:12:49 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:09.167 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:09.167 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:09.167 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:09.167 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:09.167 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:09.167 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:09.167 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:09.167 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:09.167 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:09.426 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:09.426 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:09.426 [2024-07-26 13:12:49.944162] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:09.684 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:09.684 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:13:09.684 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:09.684 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:13:09.684 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:13:09.684 13:12:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:13:09.684 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:09.684 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:09.684 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:09.684 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:09.684 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:13:09.684 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:09.684 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:09.684 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:09.684 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:09.684 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:09.684 13:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.684 13:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:09.684 "name": "Existed_Raid", 00:13:09.684 "uuid": "563ef2e0-072d-42af-85e6-f7009be64959", 00:13:09.684 "strip_size_kb": 0, 00:13:09.684 "state": "online", 00:13:09.684 "raid_level": "raid1", 00:13:09.684 "superblock": true, 00:13:09.684 "num_base_bdevs": 2, 00:13:09.684 "num_base_bdevs_discovered": 1, 00:13:09.684 "num_base_bdevs_operational": 1, 00:13:09.684 "base_bdevs_list": [ 
00:13:09.684 { 00:13:09.684 "name": null, 00:13:09.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.684 "is_configured": false, 00:13:09.684 "data_offset": 2048, 00:13:09.684 "data_size": 63488 00:13:09.684 }, 00:13:09.684 { 00:13:09.684 "name": "BaseBdev2", 00:13:09.684 "uuid": "ec1beac7-2dde-4d75-93c1-a4ffd6c6d0ae", 00:13:09.684 "is_configured": true, 00:13:09.684 "data_offset": 2048, 00:13:09.684 "data_size": 63488 00:13:09.684 } 00:13:09.684 ] 00:13:09.684 }' 00:13:09.684 13:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:09.684 13:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.618 13:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:10.618 13:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:10.618 13:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:10.618 13:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:10.618 13:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:10.618 13:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:10.618 13:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:10.877 [2024-07-26 13:12:51.220559] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:10.877 [2024-07-26 13:12:51.220638] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:10.877 [2024-07-26 13:12:51.230991] bdev_raid.c: 487:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:13:10.877 [2024-07-26 13:12:51.231021] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:10.877 [2024-07-26 13:12:51.231031] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb6e610 name Existed_Raid, state offline 00:13:10.877 13:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:10.877 13:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:10.877 13:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:10.877 13:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:11.136 13:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:11.136 13:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:11.136 13:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:13:11.136 13:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 671845 00:13:11.136 13:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 671845 ']' 00:13:11.136 13:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 671845 00:13:11.136 13:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:11.136 13:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:11.136 13:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 671845 00:13:11.136 13:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:13:11.136 13:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:11.136 13:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 671845' 00:13:11.136 killing process with pid 671845 00:13:11.136 13:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 671845 00:13:11.136 [2024-07-26 13:12:51.535899] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:11.136 13:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 671845 00:13:11.136 [2024-07-26 13:12:51.536749] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:11.395 13:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:13:11.395 00:13:11.395 real 0m10.155s 00:13:11.395 user 0m18.076s 00:13:11.395 sys 0m1.833s 00:13:11.395 13:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:11.395 13:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.395 ************************************ 00:13:11.395 END TEST raid_state_function_test_sb 00:13:11.395 ************************************ 00:13:11.395 13:12:51 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:13:11.395 13:12:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:11.395 13:12:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:11.395 13:12:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:11.395 ************************************ 00:13:11.395 START TEST raid_superblock_test 00:13:11.395 ************************************ 00:13:11.395 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:13:11.395 13:12:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:13:11.395 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:13:11.395 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:13:11.395 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:13:11.395 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:13:11.395 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:13:11.395 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:13:11.395 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:13:11.395 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:13:11.395 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:13:11.395 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:13:11.395 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:13:11.395 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:13:11.395 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:13:11.395 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:13:11.395 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=673918 00:13:11.395 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 673918 /var/tmp/spdk-raid.sock 00:13:11.395 13:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:11.395 13:12:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 673918 ']' 00:13:11.395 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:11.395 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:11.395 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:11.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:11.395 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:11.395 13:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.395 [2024-07-26 13:12:51.874702] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:13:11.395 [2024-07-26 13:12:51.874760] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid673918 ] 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:13:11.654 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:11.654 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:11.654 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:11.654 [2024-07-26 13:12:52.005803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.655 [2024-07-26 13:12:52.092218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.655 [2024-07-26 
13:12:52.152672] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:11.655 [2024-07-26 13:12:52.152714] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.590 13:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:12.590 13:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:12.590 13:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:13:12.590 13:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:13:12.590 13:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:13:12.590 13:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:13:12.590 13:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:12.590 13:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:12.590 13:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:13:12.590 13:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:12.590 13:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:12.590 malloc1 00:13:12.590 13:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:12.848 [2024-07-26 13:12:53.218592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:12.848 [2024-07-26 13:12:53.218636] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:13:12.849 [2024-07-26 13:12:53.218656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18292f0 00:13:12.849 [2024-07-26 13:12:53.218667] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.849 [2024-07-26 13:12:53.220250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.849 [2024-07-26 13:12:53.220278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:12.849 pt1 00:13:12.849 13:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:13:12.849 13:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:13:12.849 13:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:13:12.849 13:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:13:12.849 13:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:12.849 13:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:12.849 13:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:13:12.849 13:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:12.849 13:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:13.107 malloc2 00:13:13.107 13:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:13.365 [2024-07-26 13:12:53.684377] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc2 00:13:13.365 [2024-07-26 13:12:53.684417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.365 [2024-07-26 13:12:53.684433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x182a6d0 00:13:13.365 [2024-07-26 13:12:53.684444] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.365 [2024-07-26 13:12:53.685885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.365 [2024-07-26 13:12:53.685913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:13.365 pt2 00:13:13.365 13:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:13:13.365 13:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:13:13.365 13:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:13:13.624 [2024-07-26 13:12:53.912992] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:13.624 [2024-07-26 13:12:53.914154] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:13.624 [2024-07-26 13:12:53.914282] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x19c3310 00:13:13.624 [2024-07-26 13:12:53.914294] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:13.624 [2024-07-26 13:12:53.914479] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1822550 00:13:13.624 [2024-07-26 13:12:53.914609] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x19c3310 00:13:13.624 [2024-07-26 13:12:53.914618] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x19c3310 00:13:13.624 [2024-07-26 
13:12:53.914722] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.624 13:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:13.624 13:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:13.624 13:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:13.624 13:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:13.624 13:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:13.624 13:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:13.624 13:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:13.624 13:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:13.624 13:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:13.624 13:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:13.624 13:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:13.624 13:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.624 13:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:13.624 "name": "raid_bdev1", 00:13:13.624 "uuid": "dbba3b0b-bdea-4986-bd78-a73d2fc0f59e", 00:13:13.624 "strip_size_kb": 0, 00:13:13.624 "state": "online", 00:13:13.624 "raid_level": "raid1", 00:13:13.624 "superblock": true, 00:13:13.624 "num_base_bdevs": 2, 00:13:13.624 "num_base_bdevs_discovered": 2, 00:13:13.624 "num_base_bdevs_operational": 2, 00:13:13.624 "base_bdevs_list": 
[ 00:13:13.624 { 00:13:13.624 "name": "pt1", 00:13:13.624 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:13.624 "is_configured": true, 00:13:13.624 "data_offset": 2048, 00:13:13.624 "data_size": 63488 00:13:13.624 }, 00:13:13.624 { 00:13:13.624 "name": "pt2", 00:13:13.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:13.624 "is_configured": true, 00:13:13.624 "data_offset": 2048, 00:13:13.624 "data_size": 63488 00:13:13.624 } 00:13:13.624 ] 00:13:13.624 }' 00:13:13.624 13:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:13.624 13:12:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.559 13:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:13:14.559 13:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:14.559 13:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:14.559 13:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:14.559 13:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:14.559 13:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:14.559 13:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:14.559 13:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:14.559 [2024-07-26 13:12:54.932096] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:14.559 13:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:14.559 "name": "raid_bdev1", 00:13:14.559 "aliases": [ 00:13:14.559 "dbba3b0b-bdea-4986-bd78-a73d2fc0f59e" 00:13:14.559 ], 00:13:14.559 "product_name": "Raid 
Volume", 00:13:14.559 "block_size": 512, 00:13:14.559 "num_blocks": 63488, 00:13:14.559 "uuid": "dbba3b0b-bdea-4986-bd78-a73d2fc0f59e", 00:13:14.559 "assigned_rate_limits": { 00:13:14.559 "rw_ios_per_sec": 0, 00:13:14.559 "rw_mbytes_per_sec": 0, 00:13:14.559 "r_mbytes_per_sec": 0, 00:13:14.559 "w_mbytes_per_sec": 0 00:13:14.559 }, 00:13:14.559 "claimed": false, 00:13:14.559 "zoned": false, 00:13:14.559 "supported_io_types": { 00:13:14.559 "read": true, 00:13:14.559 "write": true, 00:13:14.559 "unmap": false, 00:13:14.559 "flush": false, 00:13:14.559 "reset": true, 00:13:14.559 "nvme_admin": false, 00:13:14.559 "nvme_io": false, 00:13:14.559 "nvme_io_md": false, 00:13:14.559 "write_zeroes": true, 00:13:14.559 "zcopy": false, 00:13:14.559 "get_zone_info": false, 00:13:14.559 "zone_management": false, 00:13:14.559 "zone_append": false, 00:13:14.559 "compare": false, 00:13:14.559 "compare_and_write": false, 00:13:14.559 "abort": false, 00:13:14.559 "seek_hole": false, 00:13:14.559 "seek_data": false, 00:13:14.559 "copy": false, 00:13:14.559 "nvme_iov_md": false 00:13:14.559 }, 00:13:14.559 "memory_domains": [ 00:13:14.559 { 00:13:14.559 "dma_device_id": "system", 00:13:14.559 "dma_device_type": 1 00:13:14.559 }, 00:13:14.559 { 00:13:14.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.559 "dma_device_type": 2 00:13:14.560 }, 00:13:14.560 { 00:13:14.560 "dma_device_id": "system", 00:13:14.560 "dma_device_type": 1 00:13:14.560 }, 00:13:14.560 { 00:13:14.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.560 "dma_device_type": 2 00:13:14.560 } 00:13:14.560 ], 00:13:14.560 "driver_specific": { 00:13:14.560 "raid": { 00:13:14.560 "uuid": "dbba3b0b-bdea-4986-bd78-a73d2fc0f59e", 00:13:14.560 "strip_size_kb": 0, 00:13:14.560 "state": "online", 00:13:14.560 "raid_level": "raid1", 00:13:14.560 "superblock": true, 00:13:14.560 "num_base_bdevs": 2, 00:13:14.560 "num_base_bdevs_discovered": 2, 00:13:14.560 "num_base_bdevs_operational": 2, 00:13:14.560 "base_bdevs_list": [ 
00:13:14.560 { 00:13:14.560 "name": "pt1", 00:13:14.560 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:14.560 "is_configured": true, 00:13:14.560 "data_offset": 2048, 00:13:14.560 "data_size": 63488 00:13:14.560 }, 00:13:14.560 { 00:13:14.560 "name": "pt2", 00:13:14.560 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:14.560 "is_configured": true, 00:13:14.560 "data_offset": 2048, 00:13:14.560 "data_size": 63488 00:13:14.560 } 00:13:14.560 ] 00:13:14.560 } 00:13:14.560 } 00:13:14.560 }' 00:13:14.560 13:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:14.560 13:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:14.560 pt2' 00:13:14.560 13:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:14.560 13:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:14.560 13:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:14.818 13:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:14.818 "name": "pt1", 00:13:14.818 "aliases": [ 00:13:14.818 "00000000-0000-0000-0000-000000000001" 00:13:14.818 ], 00:13:14.818 "product_name": "passthru", 00:13:14.818 "block_size": 512, 00:13:14.818 "num_blocks": 65536, 00:13:14.818 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:14.818 "assigned_rate_limits": { 00:13:14.818 "rw_ios_per_sec": 0, 00:13:14.818 "rw_mbytes_per_sec": 0, 00:13:14.818 "r_mbytes_per_sec": 0, 00:13:14.818 "w_mbytes_per_sec": 0 00:13:14.818 }, 00:13:14.818 "claimed": true, 00:13:14.818 "claim_type": "exclusive_write", 00:13:14.818 "zoned": false, 00:13:14.818 "supported_io_types": { 00:13:14.818 "read": true, 00:13:14.818 "write": true, 00:13:14.818 
"unmap": true, 00:13:14.818 "flush": true, 00:13:14.818 "reset": true, 00:13:14.818 "nvme_admin": false, 00:13:14.818 "nvme_io": false, 00:13:14.818 "nvme_io_md": false, 00:13:14.818 "write_zeroes": true, 00:13:14.818 "zcopy": true, 00:13:14.818 "get_zone_info": false, 00:13:14.818 "zone_management": false, 00:13:14.818 "zone_append": false, 00:13:14.818 "compare": false, 00:13:14.818 "compare_and_write": false, 00:13:14.818 "abort": true, 00:13:14.818 "seek_hole": false, 00:13:14.818 "seek_data": false, 00:13:14.818 "copy": true, 00:13:14.818 "nvme_iov_md": false 00:13:14.818 }, 00:13:14.818 "memory_domains": [ 00:13:14.818 { 00:13:14.818 "dma_device_id": "system", 00:13:14.818 "dma_device_type": 1 00:13:14.818 }, 00:13:14.818 { 00:13:14.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.818 "dma_device_type": 2 00:13:14.818 } 00:13:14.818 ], 00:13:14.818 "driver_specific": { 00:13:14.818 "passthru": { 00:13:14.818 "name": "pt1", 00:13:14.818 "base_bdev_name": "malloc1" 00:13:14.818 } 00:13:14.818 } 00:13:14.818 }' 00:13:14.818 13:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:14.818 13:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:14.818 13:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:14.818 13:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:15.077 13:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:15.077 13:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:15.077 13:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:15.077 13:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:15.077 13:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:15.077 13:12:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:15.077 13:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:15.077 13:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:15.077 13:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:15.077 13:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:15.077 13:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:15.335 13:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:15.335 "name": "pt2", 00:13:15.335 "aliases": [ 00:13:15.335 "00000000-0000-0000-0000-000000000002" 00:13:15.335 ], 00:13:15.335 "product_name": "passthru", 00:13:15.335 "block_size": 512, 00:13:15.335 "num_blocks": 65536, 00:13:15.335 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:15.335 "assigned_rate_limits": { 00:13:15.335 "rw_ios_per_sec": 0, 00:13:15.335 "rw_mbytes_per_sec": 0, 00:13:15.335 "r_mbytes_per_sec": 0, 00:13:15.335 "w_mbytes_per_sec": 0 00:13:15.335 }, 00:13:15.335 "claimed": true, 00:13:15.335 "claim_type": "exclusive_write", 00:13:15.335 "zoned": false, 00:13:15.335 "supported_io_types": { 00:13:15.335 "read": true, 00:13:15.335 "write": true, 00:13:15.335 "unmap": true, 00:13:15.335 "flush": true, 00:13:15.335 "reset": true, 00:13:15.335 "nvme_admin": false, 00:13:15.335 "nvme_io": false, 00:13:15.335 "nvme_io_md": false, 00:13:15.335 "write_zeroes": true, 00:13:15.335 "zcopy": true, 00:13:15.335 "get_zone_info": false, 00:13:15.335 "zone_management": false, 00:13:15.335 "zone_append": false, 00:13:15.335 "compare": false, 00:13:15.335 "compare_and_write": false, 00:13:15.335 "abort": true, 00:13:15.335 "seek_hole": false, 00:13:15.335 "seek_data": false, 00:13:15.335 "copy": true, 00:13:15.335 "nvme_iov_md": 
false 00:13:15.335 }, 00:13:15.335 "memory_domains": [ 00:13:15.335 { 00:13:15.335 "dma_device_id": "system", 00:13:15.335 "dma_device_type": 1 00:13:15.335 }, 00:13:15.335 { 00:13:15.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.335 "dma_device_type": 2 00:13:15.335 } 00:13:15.335 ], 00:13:15.335 "driver_specific": { 00:13:15.335 "passthru": { 00:13:15.335 "name": "pt2", 00:13:15.335 "base_bdev_name": "malloc2" 00:13:15.335 } 00:13:15.335 } 00:13:15.335 }' 00:13:15.335 13:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:15.335 13:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:15.592 13:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:15.592 13:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:15.592 13:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:15.592 13:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:15.592 13:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:15.592 13:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:15.592 13:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:15.592 13:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:15.592 13:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:15.850 13:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:15.850 13:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:13:15.850 13:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:15.850 [2024-07-26 
13:12:56.343803] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:15.850 13:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=dbba3b0b-bdea-4986-bd78-a73d2fc0f59e 00:13:15.850 13:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z dbba3b0b-bdea-4986-bd78-a73d2fc0f59e ']' 00:13:15.850 13:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:16.108 [2024-07-26 13:12:56.572175] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:16.108 [2024-07-26 13:12:56.572191] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:16.108 [2024-07-26 13:12:56.572242] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:16.108 [2024-07-26 13:12:56.572295] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:16.108 [2024-07-26 13:12:56.572306] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19c3310 name raid_bdev1, state offline 00:13:16.108 13:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:16.108 13:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:13:16.366 13:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:13:16.366 13:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:13:16.366 13:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:13:16.366 13:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:16.625 13:12:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:13:16.625 13:12:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:16.884 13:12:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:16.884 13:12:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:17.143 13:12:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:13:17.143 13:12:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:13:17.143 13:12:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:17.143 13:12:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:13:17.143 13:12:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:17.143 13:12:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:17.143 13:12:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:17.143 13:12:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:17.143 13:12:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:17.143 13:12:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:17.143 13:12:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:17.143 13:12:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:13:17.143 13:12:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:13:17.402 [2024-07-26 13:12:57.687061] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:17.402 [2024-07-26 13:12:57.688311] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:17.402 [2024-07-26 13:12:57.688364] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:17.402 [2024-07-26 13:12:57.688401] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:17.402 [2024-07-26 13:12:57.688419] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:17.402 [2024-07-26 13:12:57.688427] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x181fec0 name raid_bdev1, state configuring 00:13:17.402 request: 00:13:17.402 { 00:13:17.402 "name": "raid_bdev1", 00:13:17.402 "raid_level": "raid1", 00:13:17.402 "base_bdevs": [ 00:13:17.402 "malloc1", 00:13:17.402 "malloc2" 00:13:17.402 ], 00:13:17.402 "superblock": false, 00:13:17.402 "method": "bdev_raid_create", 00:13:17.402 "req_id": 1 00:13:17.402 } 
00:13:17.402 Got JSON-RPC error response 00:13:17.402 response: 00:13:17.402 { 00:13:17.402 "code": -17, 00:13:17.402 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:17.402 } 00:13:17.402 13:12:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:17.402 13:12:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:17.402 13:12:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:17.402 13:12:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:17.402 13:12:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:17.402 13:12:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:13:17.661 13:12:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:13:17.661 13:12:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:13:17.661 13:12:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:17.661 [2024-07-26 13:12:58.144229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:17.661 [2024-07-26 13:12:58.144273] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.661 [2024-07-26 13:12:58.144291] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19ccd70 00:13:17.661 [2024-07-26 13:12:58.144302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.661 [2024-07-26 13:12:58.145765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.661 [2024-07-26 13:12:58.145791] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:17.661 [2024-07-26 13:12:58.145850] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:17.661 [2024-07-26 13:12:58.145874] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:17.661 pt1 00:13:17.661 13:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:17.661 13:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:17.661 13:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:17.661 13:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:17.661 13:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:17.661 13:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:17.661 13:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:17.661 13:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:17.661 13:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:17.661 13:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:17.661 13:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:17.661 13:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.920 13:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:17.920 "name": "raid_bdev1", 00:13:17.920 "uuid": "dbba3b0b-bdea-4986-bd78-a73d2fc0f59e", 00:13:17.920 "strip_size_kb": 0, 
00:13:17.920 "state": "configuring", 00:13:17.920 "raid_level": "raid1", 00:13:17.920 "superblock": true, 00:13:17.920 "num_base_bdevs": 2, 00:13:17.920 "num_base_bdevs_discovered": 1, 00:13:17.920 "num_base_bdevs_operational": 2, 00:13:17.920 "base_bdevs_list": [ 00:13:17.920 { 00:13:17.920 "name": "pt1", 00:13:17.920 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:17.920 "is_configured": true, 00:13:17.920 "data_offset": 2048, 00:13:17.920 "data_size": 63488 00:13:17.920 }, 00:13:17.920 { 00:13:17.920 "name": null, 00:13:17.920 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:17.920 "is_configured": false, 00:13:17.920 "data_offset": 2048, 00:13:17.920 "data_size": 63488 00:13:17.920 } 00:13:17.920 ] 00:13:17.920 }' 00:13:17.920 13:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:17.920 13:12:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.484 13:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:13:18.484 13:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:13:18.484 13:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:13:18.484 13:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:18.743 [2024-07-26 13:12:59.166927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:18.743 [2024-07-26 13:12:59.166968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:18.743 [2024-07-26 13:12:59.166986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19cc3f0 00:13:18.743 [2024-07-26 13:12:59.166997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:18.743 [2024-07-26 
13:12:59.167309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:18.743 [2024-07-26 13:12:59.167326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:18.743 [2024-07-26 13:12:59.167379] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:18.743 [2024-07-26 13:12:59.167396] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:18.743 [2024-07-26 13:12:59.167482] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x19c3b70 00:13:18.743 [2024-07-26 13:12:59.167492] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:18.743 [2024-07-26 13:12:59.167643] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1820c60 00:13:18.743 [2024-07-26 13:12:59.167764] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x19c3b70 00:13:18.743 [2024-07-26 13:12:59.167774] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x19c3b70 00:13:18.743 [2024-07-26 13:12:59.167865] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.743 pt2 00:13:18.743 13:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:13:18.743 13:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:13:18.743 13:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:18.743 13:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:18.743 13:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:18.743 13:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:18.743 13:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=0 00:13:18.743 13:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:18.743 13:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:18.743 13:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:18.743 13:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:18.743 13:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:18.743 13:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:18.743 13:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.000 13:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:19.000 "name": "raid_bdev1", 00:13:19.000 "uuid": "dbba3b0b-bdea-4986-bd78-a73d2fc0f59e", 00:13:19.000 "strip_size_kb": 0, 00:13:19.000 "state": "online", 00:13:19.000 "raid_level": "raid1", 00:13:19.000 "superblock": true, 00:13:19.000 "num_base_bdevs": 2, 00:13:19.000 "num_base_bdevs_discovered": 2, 00:13:19.000 "num_base_bdevs_operational": 2, 00:13:19.000 "base_bdevs_list": [ 00:13:19.000 { 00:13:19.000 "name": "pt1", 00:13:19.000 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:19.000 "is_configured": true, 00:13:19.000 "data_offset": 2048, 00:13:19.000 "data_size": 63488 00:13:19.000 }, 00:13:19.000 { 00:13:19.000 "name": "pt2", 00:13:19.000 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:19.000 "is_configured": true, 00:13:19.000 "data_offset": 2048, 00:13:19.000 "data_size": 63488 00:13:19.000 } 00:13:19.000 ] 00:13:19.000 }' 00:13:19.000 13:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:19.000 13:12:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:19.564 13:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:13:19.564 13:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:19.564 13:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:19.564 13:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:19.564 13:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:19.564 13:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:19.564 13:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:19.564 13:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:19.822 [2024-07-26 13:13:00.213944] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:19.822 13:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:19.822 "name": "raid_bdev1", 00:13:19.822 "aliases": [ 00:13:19.822 "dbba3b0b-bdea-4986-bd78-a73d2fc0f59e" 00:13:19.822 ], 00:13:19.822 "product_name": "Raid Volume", 00:13:19.822 "block_size": 512, 00:13:19.822 "num_blocks": 63488, 00:13:19.822 "uuid": "dbba3b0b-bdea-4986-bd78-a73d2fc0f59e", 00:13:19.822 "assigned_rate_limits": { 00:13:19.822 "rw_ios_per_sec": 0, 00:13:19.822 "rw_mbytes_per_sec": 0, 00:13:19.822 "r_mbytes_per_sec": 0, 00:13:19.822 "w_mbytes_per_sec": 0 00:13:19.822 }, 00:13:19.822 "claimed": false, 00:13:19.822 "zoned": false, 00:13:19.822 "supported_io_types": { 00:13:19.822 "read": true, 00:13:19.822 "write": true, 00:13:19.822 "unmap": false, 00:13:19.822 "flush": false, 00:13:19.822 "reset": true, 00:13:19.822 "nvme_admin": false, 00:13:19.822 "nvme_io": 
false, 00:13:19.822 "nvme_io_md": false, 00:13:19.822 "write_zeroes": true, 00:13:19.822 "zcopy": false, 00:13:19.822 "get_zone_info": false, 00:13:19.822 "zone_management": false, 00:13:19.822 "zone_append": false, 00:13:19.822 "compare": false, 00:13:19.822 "compare_and_write": false, 00:13:19.822 "abort": false, 00:13:19.822 "seek_hole": false, 00:13:19.822 "seek_data": false, 00:13:19.822 "copy": false, 00:13:19.822 "nvme_iov_md": false 00:13:19.822 }, 00:13:19.822 "memory_domains": [ 00:13:19.822 { 00:13:19.822 "dma_device_id": "system", 00:13:19.822 "dma_device_type": 1 00:13:19.822 }, 00:13:19.822 { 00:13:19.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.822 "dma_device_type": 2 00:13:19.822 }, 00:13:19.822 { 00:13:19.822 "dma_device_id": "system", 00:13:19.822 "dma_device_type": 1 00:13:19.822 }, 00:13:19.822 { 00:13:19.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.822 "dma_device_type": 2 00:13:19.822 } 00:13:19.822 ], 00:13:19.822 "driver_specific": { 00:13:19.822 "raid": { 00:13:19.822 "uuid": "dbba3b0b-bdea-4986-bd78-a73d2fc0f59e", 00:13:19.822 "strip_size_kb": 0, 00:13:19.822 "state": "online", 00:13:19.822 "raid_level": "raid1", 00:13:19.822 "superblock": true, 00:13:19.822 "num_base_bdevs": 2, 00:13:19.822 "num_base_bdevs_discovered": 2, 00:13:19.822 "num_base_bdevs_operational": 2, 00:13:19.822 "base_bdevs_list": [ 00:13:19.822 { 00:13:19.822 "name": "pt1", 00:13:19.822 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:19.822 "is_configured": true, 00:13:19.822 "data_offset": 2048, 00:13:19.822 "data_size": 63488 00:13:19.822 }, 00:13:19.822 { 00:13:19.822 "name": "pt2", 00:13:19.822 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:19.822 "is_configured": true, 00:13:19.822 "data_offset": 2048, 00:13:19.822 "data_size": 63488 00:13:19.822 } 00:13:19.822 ] 00:13:19.822 } 00:13:19.822 } 00:13:19.822 }' 00:13:19.822 13:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:19.822 13:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:19.822 pt2' 00:13:19.822 13:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:19.822 13:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:19.822 13:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:20.088 13:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:20.088 "name": "pt1", 00:13:20.088 "aliases": [ 00:13:20.088 "00000000-0000-0000-0000-000000000001" 00:13:20.088 ], 00:13:20.088 "product_name": "passthru", 00:13:20.088 "block_size": 512, 00:13:20.088 "num_blocks": 65536, 00:13:20.088 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:20.088 "assigned_rate_limits": { 00:13:20.088 "rw_ios_per_sec": 0, 00:13:20.088 "rw_mbytes_per_sec": 0, 00:13:20.088 "r_mbytes_per_sec": 0, 00:13:20.088 "w_mbytes_per_sec": 0 00:13:20.088 }, 00:13:20.088 "claimed": true, 00:13:20.088 "claim_type": "exclusive_write", 00:13:20.088 "zoned": false, 00:13:20.088 "supported_io_types": { 00:13:20.088 "read": true, 00:13:20.088 "write": true, 00:13:20.088 "unmap": true, 00:13:20.088 "flush": true, 00:13:20.088 "reset": true, 00:13:20.088 "nvme_admin": false, 00:13:20.088 "nvme_io": false, 00:13:20.088 "nvme_io_md": false, 00:13:20.088 "write_zeroes": true, 00:13:20.088 "zcopy": true, 00:13:20.088 "get_zone_info": false, 00:13:20.088 "zone_management": false, 00:13:20.088 "zone_append": false, 00:13:20.088 "compare": false, 00:13:20.088 "compare_and_write": false, 00:13:20.088 "abort": true, 00:13:20.088 "seek_hole": false, 00:13:20.088 "seek_data": false, 00:13:20.088 "copy": true, 00:13:20.088 "nvme_iov_md": false 00:13:20.088 }, 00:13:20.088 
"memory_domains": [ 00:13:20.088 { 00:13:20.088 "dma_device_id": "system", 00:13:20.088 "dma_device_type": 1 00:13:20.088 }, 00:13:20.088 { 00:13:20.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.088 "dma_device_type": 2 00:13:20.088 } 00:13:20.088 ], 00:13:20.088 "driver_specific": { 00:13:20.088 "passthru": { 00:13:20.088 "name": "pt1", 00:13:20.088 "base_bdev_name": "malloc1" 00:13:20.088 } 00:13:20.088 } 00:13:20.088 }' 00:13:20.088 13:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:20.088 13:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:20.088 13:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:20.088 13:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:20.345 13:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:20.345 13:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:20.345 13:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:20.345 13:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:20.345 13:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:20.345 13:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:20.345 13:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:20.345 13:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:20.345 13:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:20.345 13:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:20.345 13:13:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:20.603 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:20.603 "name": "pt2", 00:13:20.603 "aliases": [ 00:13:20.603 "00000000-0000-0000-0000-000000000002" 00:13:20.603 ], 00:13:20.603 "product_name": "passthru", 00:13:20.603 "block_size": 512, 00:13:20.603 "num_blocks": 65536, 00:13:20.603 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:20.603 "assigned_rate_limits": { 00:13:20.603 "rw_ios_per_sec": 0, 00:13:20.603 "rw_mbytes_per_sec": 0, 00:13:20.603 "r_mbytes_per_sec": 0, 00:13:20.603 "w_mbytes_per_sec": 0 00:13:20.603 }, 00:13:20.603 "claimed": true, 00:13:20.603 "claim_type": "exclusive_write", 00:13:20.603 "zoned": false, 00:13:20.603 "supported_io_types": { 00:13:20.603 "read": true, 00:13:20.603 "write": true, 00:13:20.603 "unmap": true, 00:13:20.603 "flush": true, 00:13:20.603 "reset": true, 00:13:20.603 "nvme_admin": false, 00:13:20.603 "nvme_io": false, 00:13:20.603 "nvme_io_md": false, 00:13:20.603 "write_zeroes": true, 00:13:20.603 "zcopy": true, 00:13:20.603 "get_zone_info": false, 00:13:20.603 "zone_management": false, 00:13:20.603 "zone_append": false, 00:13:20.603 "compare": false, 00:13:20.603 "compare_and_write": false, 00:13:20.603 "abort": true, 00:13:20.603 "seek_hole": false, 00:13:20.603 "seek_data": false, 00:13:20.603 "copy": true, 00:13:20.603 "nvme_iov_md": false 00:13:20.603 }, 00:13:20.603 "memory_domains": [ 00:13:20.603 { 00:13:20.603 "dma_device_id": "system", 00:13:20.603 "dma_device_type": 1 00:13:20.603 }, 00:13:20.603 { 00:13:20.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.603 "dma_device_type": 2 00:13:20.603 } 00:13:20.603 ], 00:13:20.603 "driver_specific": { 00:13:20.603 "passthru": { 00:13:20.603 "name": "pt2", 00:13:20.603 "base_bdev_name": "malloc2" 00:13:20.603 } 00:13:20.603 } 00:13:20.603 }' 00:13:20.603 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:20.603 13:13:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:20.861 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:20.861 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:20.861 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:20.861 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:20.861 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:20.861 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:20.861 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:20.861 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:21.119 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:21.119 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:21.119 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:21.119 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:13:21.119 [2024-07-26 13:13:01.641696] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:21.376 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' dbba3b0b-bdea-4986-bd78-a73d2fc0f59e '!=' dbba3b0b-bdea-4986-bd78-a73d2fc0f59e ']' 00:13:21.376 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:13:21.376 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:21.376 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:13:21.376 13:13:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:21.376 [2024-07-26 13:13:01.870103] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:21.376 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:21.376 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:21.376 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:21.376 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:21.376 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:21.376 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:13:21.376 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:21.376 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:21.377 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:21.377 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:21.647 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:21.647 13:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.647 13:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:21.647 "name": "raid_bdev1", 00:13:21.647 "uuid": "dbba3b0b-bdea-4986-bd78-a73d2fc0f59e", 00:13:21.647 "strip_size_kb": 0, 00:13:21.647 "state": "online", 00:13:21.647 "raid_level": 
"raid1", 00:13:21.647 "superblock": true, 00:13:21.647 "num_base_bdevs": 2, 00:13:21.647 "num_base_bdevs_discovered": 1, 00:13:21.647 "num_base_bdevs_operational": 1, 00:13:21.647 "base_bdevs_list": [ 00:13:21.647 { 00:13:21.647 "name": null, 00:13:21.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.647 "is_configured": false, 00:13:21.647 "data_offset": 2048, 00:13:21.647 "data_size": 63488 00:13:21.647 }, 00:13:21.647 { 00:13:21.647 "name": "pt2", 00:13:21.647 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:21.647 "is_configured": true, 00:13:21.647 "data_offset": 2048, 00:13:21.647 "data_size": 63488 00:13:21.647 } 00:13:21.647 ] 00:13:21.647 }' 00:13:21.647 13:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:21.647 13:13:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.228 13:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:22.487 [2024-07-26 13:13:02.904801] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:22.487 [2024-07-26 13:13:02.904824] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:22.487 [2024-07-26 13:13:02.904873] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:22.487 [2024-07-26 13:13:02.904916] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:22.487 [2024-07-26 13:13:02.904927] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19c3b70 name raid_bdev1, state offline 00:13:22.487 13:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:22.487 13:13:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:13:22.745 13:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:13:22.745 13:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:13:22.745 13:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:13:22.745 13:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:13:22.745 13:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:23.004 13:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:13:23.004 13:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:13:23.004 13:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:13:23.004 13:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:13:23.004 13:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=1 00:13:23.004 13:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:23.262 [2024-07-26 13:13:03.586564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:23.262 [2024-07-26 13:13:03.586608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:23.262 [2024-07-26 13:13:03.586626] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x181fe00 00:13:23.262 [2024-07-26 13:13:03.586638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.262 [2024-07-26 13:13:03.588176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.262 [2024-07-26 
13:13:03.588211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:23.262 [2024-07-26 13:13:03.588274] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:23.262 [2024-07-26 13:13:03.588298] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:23.262 [2024-07-26 13:13:03.588373] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x18229e0 00:13:23.262 [2024-07-26 13:13:03.588383] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:23.262 [2024-07-26 13:13:03.588536] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x182a960 00:13:23.262 [2024-07-26 13:13:03.588648] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x18229e0 00:13:23.262 [2024-07-26 13:13:03.588657] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x18229e0 00:13:23.262 [2024-07-26 13:13:03.588750] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.262 pt2 00:13:23.262 13:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:23.262 13:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:23.262 13:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:23.262 13:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:23.262 13:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:23.262 13:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:13:23.262 13:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:23.262 13:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:13:23.262 13:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:23.262 13:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:23.262 13:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:23.262 13:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.521 13:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:23.521 "name": "raid_bdev1", 00:13:23.521 "uuid": "dbba3b0b-bdea-4986-bd78-a73d2fc0f59e", 00:13:23.521 "strip_size_kb": 0, 00:13:23.521 "state": "online", 00:13:23.521 "raid_level": "raid1", 00:13:23.521 "superblock": true, 00:13:23.521 "num_base_bdevs": 2, 00:13:23.521 "num_base_bdevs_discovered": 1, 00:13:23.521 "num_base_bdevs_operational": 1, 00:13:23.521 "base_bdevs_list": [ 00:13:23.521 { 00:13:23.521 "name": null, 00:13:23.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.521 "is_configured": false, 00:13:23.521 "data_offset": 2048, 00:13:23.521 "data_size": 63488 00:13:23.521 }, 00:13:23.521 { 00:13:23.521 "name": "pt2", 00:13:23.521 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:23.521 "is_configured": true, 00:13:23.521 "data_offset": 2048, 00:13:23.521 "data_size": 63488 00:13:23.521 } 00:13:23.521 ] 00:13:23.521 }' 00:13:23.521 13:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:23.521 13:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.087 13:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:24.087 [2024-07-26 13:13:04.613250] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete 
raid bdev: raid_bdev1 00:13:24.087 [2024-07-26 13:13:04.613275] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:24.087 [2024-07-26 13:13:04.613325] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:24.087 [2024-07-26 13:13:04.613365] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:24.087 [2024-07-26 13:13:04.613375] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18229e0 name raid_bdev1, state offline 00:13:24.345 13:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.345 13:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:13:24.345 13:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:13:24.345 13:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:13:24.345 13:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:13:24.345 13:13:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:24.604 [2024-07-26 13:13:05.074458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:24.604 [2024-07-26 13:13:05.074506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.604 [2024-07-26 13:13:05.074524] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19c39d0 00:13:24.604 [2024-07-26 13:13:05.074536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.604 [2024-07-26 13:13:05.076031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:13:24.604 [2024-07-26 13:13:05.076058] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:24.604 [2024-07-26 13:13:05.076115] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:24.604 [2024-07-26 13:13:05.076148] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:24.604 [2024-07-26 13:13:05.076244] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:24.604 [2024-07-26 13:13:05.076256] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:24.604 [2024-07-26 13:13:05.076270] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18212c0 name raid_bdev1, state configuring 00:13:24.604 [2024-07-26 13:13:05.076292] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:24.604 [2024-07-26 13:13:05.076344] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x18226c0 00:13:24.604 [2024-07-26 13:13:05.076353] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:24.604 [2024-07-26 13:13:05.076504] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18225c0 00:13:24.604 [2024-07-26 13:13:05.076618] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x18226c0 00:13:24.604 [2024-07-26 13:13:05.076626] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x18226c0 00:13:24.604 [2024-07-26 13:13:05.076717] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.604 pt1 00:13:24.604 13:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:13:24.604 13:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:24.604 13:13:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:24.604 13:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:24.604 13:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:24.604 13:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:24.604 13:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:13:24.604 13:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:24.604 13:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:24.604 13:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:24.604 13:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:24.604 13:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.604 13:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.862 13:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:24.862 "name": "raid_bdev1", 00:13:24.862 "uuid": "dbba3b0b-bdea-4986-bd78-a73d2fc0f59e", 00:13:24.862 "strip_size_kb": 0, 00:13:24.862 "state": "online", 00:13:24.862 "raid_level": "raid1", 00:13:24.862 "superblock": true, 00:13:24.862 "num_base_bdevs": 2, 00:13:24.862 "num_base_bdevs_discovered": 1, 00:13:24.862 "num_base_bdevs_operational": 1, 00:13:24.862 "base_bdevs_list": [ 00:13:24.862 { 00:13:24.862 "name": null, 00:13:24.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.862 "is_configured": false, 00:13:24.862 "data_offset": 2048, 00:13:24.862 "data_size": 63488 00:13:24.862 }, 00:13:24.862 { 
00:13:24.862 "name": "pt2", 00:13:24.862 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:24.862 "is_configured": true, 00:13:24.862 "data_offset": 2048, 00:13:24.862 "data_size": 63488 00:13:24.862 } 00:13:24.862 ] 00:13:24.862 }' 00:13:24.862 13:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:24.862 13:13:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.428 13:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:25.428 13:13:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:25.687 13:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:13:25.687 13:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:25.687 13:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:13:25.946 [2024-07-26 13:13:06.354048] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:25.946 13:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' dbba3b0b-bdea-4986-bd78-a73d2fc0f59e '!=' dbba3b0b-bdea-4986-bd78-a73d2fc0f59e ']' 00:13:25.946 13:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 673918 00:13:25.946 13:13:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 673918 ']' 00:13:25.946 13:13:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 673918 00:13:25.946 13:13:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:13:25.946 13:13:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:13:25.946 13:13:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 673918 00:13:25.946 13:13:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:25.946 13:13:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:25.946 13:13:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 673918' 00:13:25.946 killing process with pid 673918 00:13:25.946 13:13:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 673918 00:13:25.946 [2024-07-26 13:13:06.434058] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:25.946 [2024-07-26 13:13:06.434107] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:25.946 [2024-07-26 13:13:06.434152] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:25.946 [2024-07-26 13:13:06.434163] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18226c0 name raid_bdev1, state offline 00:13:25.946 13:13:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 673918 00:13:25.946 [2024-07-26 13:13:06.450022] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:26.205 13:13:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:13:26.205 00:13:26.205 real 0m14.825s 00:13:26.205 user 0m26.841s 00:13:26.205 sys 0m2.765s 00:13:26.205 13:13:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:26.205 13:13:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.205 ************************************ 00:13:26.205 END TEST raid_superblock_test 00:13:26.205 ************************************ 00:13:26.205 13:13:06 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test 
raid_io_error_test raid1 2 read 00:13:26.205 13:13:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:26.205 13:13:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:26.205 13:13:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:26.205 ************************************ 00:13:26.205 START TEST raid_read_error_test 00:13:26.205 ************************************ 00:13:26.205 13:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:13:26.205 13:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:13:26.205 13:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:13:26.205 13:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:13:26.464 13:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:13:26.464 13:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:26.464 13:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:13:26.465 13:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:13:26.465 13:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:26.465 13:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:13:26.465 13:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:13:26.465 13:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:26.465 13:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:26.465 13:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:13:26.465 13:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local 
raid_bdev_name=raid_bdev1 00:13:26.465 13:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:13:26.465 13:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:13:26.465 13:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:13:26.465 13:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:13:26.465 13:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:13:26.465 13:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:13:26.465 13:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:13:26.465 13:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.9fdMDajDbN 00:13:26.465 13:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=676665 00:13:26.465 13:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 676665 /var/tmp/spdk-raid.sock 00:13:26.465 13:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:26.465 13:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 676665 ']' 00:13:26.465 13:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:26.465 13:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:26.465 13:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:26.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:13:26.465 13:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:26.465 13:13:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.465 [2024-07-26 13:13:06.802661] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:13:26.465 [2024-07-26 13:13:06.802719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid676665 ] 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3f:01.7 
cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:26.465 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:26.465 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:26.465 [2024-07-26 13:13:06.934540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.724 [2024-07-26 13:13:07.021520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.724 [2024-07-26 13:13:07.087655] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:26.724 [2024-07-26 13:13:07.087690] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:27.290 13:13:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:27.290 13:13:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:27.290 13:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:13:27.290 13:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:27.549 BaseBdev1_malloc 00:13:27.549 13:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:27.807 true 00:13:27.807 13:13:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:28.066 [2024-07-26 13:13:08.358201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:28.066 [2024-07-26 13:13:08.358239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.066 [2024-07-26 13:13:08.358256] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf28190 00:13:28.066 [2024-07-26 13:13:08.358268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.066 [2024-07-26 13:13:08.359870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.066 [2024-07-26 13:13:08.359898] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:28.066 BaseBdev1 00:13:28.066 13:13:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:13:28.066 13:13:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:28.066 BaseBdev2_malloc 00:13:28.324 13:13:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:28.324 true 00:13:28.324 13:13:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:28.583 [2024-07-26 13:13:09.044390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:28.583 [2024-07-26 13:13:09.044429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.583 [2024-07-26 13:13:09.044447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf2ce20 00:13:28.583 [2024-07-26 13:13:09.044458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.583 [2024-07-26 13:13:09.045859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.583 [2024-07-26 13:13:09.045885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:28.583 BaseBdev2 00:13:28.583 13:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:13:28.841 [2024-07-26 13:13:09.268996] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:28.841 [2024-07-26 13:13:09.270168] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:28.841 [2024-07-26 13:13:09.270327] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xf2ea50 00:13:28.841 [2024-07-26 13:13:09.270340] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:28.841 [2024-07-26 13:13:09.270528] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf31770 00:13:28.841 [2024-07-26 13:13:09.270664] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf2ea50 00:13:28.841 [2024-07-26 13:13:09.270674] 
bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xf2ea50 00:13:28.841 [2024-07-26 13:13:09.270779] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.841 13:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:28.841 13:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:28.841 13:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:28.841 13:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:28.841 13:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:28.841 13:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:28.841 13:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:28.841 13:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:28.841 13:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:28.841 13:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:28.841 13:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.841 13:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.100 13:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:29.100 "name": "raid_bdev1", 00:13:29.100 "uuid": "5ad24643-7ee1-4de5-9e80-7705f1d4248e", 00:13:29.100 "strip_size_kb": 0, 00:13:29.100 "state": "online", 00:13:29.100 "raid_level": "raid1", 00:13:29.100 "superblock": true, 00:13:29.100 
"num_base_bdevs": 2, 00:13:29.100 "num_base_bdevs_discovered": 2, 00:13:29.100 "num_base_bdevs_operational": 2, 00:13:29.100 "base_bdevs_list": [ 00:13:29.100 { 00:13:29.100 "name": "BaseBdev1", 00:13:29.100 "uuid": "20b09d52-4055-505a-b68a-952d852129ad", 00:13:29.100 "is_configured": true, 00:13:29.100 "data_offset": 2048, 00:13:29.100 "data_size": 63488 00:13:29.100 }, 00:13:29.100 { 00:13:29.100 "name": "BaseBdev2", 00:13:29.100 "uuid": "fd9e665c-fc4b-5f06-953f-8aa8e901a184", 00:13:29.100 "is_configured": true, 00:13:29.100 "data_offset": 2048, 00:13:29.100 "data_size": 63488 00:13:29.100 } 00:13:29.100 ] 00:13:29.100 }' 00:13:29.100 13:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:29.100 13:13:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.665 13:13:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:29.665 13:13:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:13:29.665 [2024-07-26 13:13:10.175716] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf2e1b0 00:13:30.599 13:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:30.857 13:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:13:30.857 13:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:30.857 13:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:13:30.857 13:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:13:30.857 13:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:30.857 13:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:30.857 13:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:30.857 13:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:30.857 13:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:30.857 13:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:30.857 13:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:30.857 13:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:30.857 13:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:30.857 13:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:30.857 13:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:30.857 13:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.116 13:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:31.116 "name": "raid_bdev1", 00:13:31.116 "uuid": "5ad24643-7ee1-4de5-9e80-7705f1d4248e", 00:13:31.116 "strip_size_kb": 0, 00:13:31.116 "state": "online", 00:13:31.116 "raid_level": "raid1", 00:13:31.116 "superblock": true, 00:13:31.116 "num_base_bdevs": 2, 00:13:31.116 "num_base_bdevs_discovered": 2, 00:13:31.116 "num_base_bdevs_operational": 2, 00:13:31.116 "base_bdevs_list": [ 00:13:31.116 { 00:13:31.116 "name": "BaseBdev1", 00:13:31.116 "uuid": "20b09d52-4055-505a-b68a-952d852129ad", 00:13:31.116 "is_configured": true, 00:13:31.116 
"data_offset": 2048, 00:13:31.116 "data_size": 63488 00:13:31.116 }, 00:13:31.116 { 00:13:31.116 "name": "BaseBdev2", 00:13:31.116 "uuid": "fd9e665c-fc4b-5f06-953f-8aa8e901a184", 00:13:31.116 "is_configured": true, 00:13:31.116 "data_offset": 2048, 00:13:31.116 "data_size": 63488 00:13:31.116 } 00:13:31.116 ] 00:13:31.116 }' 00:13:31.116 13:13:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:31.116 13:13:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.683 13:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:31.941 [2024-07-26 13:13:12.343795] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:31.941 [2024-07-26 13:13:12.343837] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:31.941 [2024-07-26 13:13:12.346720] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.941 [2024-07-26 13:13:12.346748] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.941 [2024-07-26 13:13:12.346817] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:31.941 [2024-07-26 13:13:12.346828] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf2ea50 name raid_bdev1, state offline 00:13:31.941 0 00:13:31.941 13:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 676665 00:13:31.941 13:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 676665 ']' 00:13:31.941 13:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 676665 00:13:31.941 13:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:13:31.941 13:13:12 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:31.941 13:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 676665 00:13:31.941 13:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:31.941 13:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:31.941 13:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 676665' 00:13:31.941 killing process with pid 676665 00:13:31.941 13:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 676665 00:13:31.941 [2024-07-26 13:13:12.418360] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:31.941 13:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 676665 00:13:31.941 [2024-07-26 13:13:12.428158] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:32.201 13:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.9fdMDajDbN 00:13:32.201 13:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:13:32.201 13:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:13:32.201 13:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:13:32.201 13:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:13:32.201 13:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:32.201 13:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:13:32.201 13:13:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:32.201 00:13:32.201 real 0m5.903s 00:13:32.201 user 0m9.157s 00:13:32.201 sys 0m1.036s 00:13:32.201 13:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:13:32.201 13:13:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.201 ************************************ 00:13:32.201 END TEST raid_read_error_test 00:13:32.201 ************************************ 00:13:32.201 13:13:12 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:13:32.201 13:13:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:32.201 13:13:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:32.201 13:13:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:32.201 ************************************ 00:13:32.201 START TEST raid_write_error_test 00:13:32.201 ************************************ 00:13:32.201 13:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:13:32.201 13:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:13:32.201 13:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:13:32.201 13:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:13:32.201 13:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:13:32.201 13:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:32.201 13:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:13:32.201 13:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:13:32.201 13:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:32.201 13:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:13:32.201 13:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:13:32.201 13:13:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:13:32.201 13:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:32.201 13:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:13:32.201 13:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:13:32.201 13:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:13:32.201 13:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:13:32.201 13:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:13:32.201 13:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:13:32.201 13:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:13:32.201 13:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:13:32.201 13:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:13:32.460 13:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.QP7b8VHR4m 00:13:32.460 13:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=677782 00:13:32.460 13:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 677782 /var/tmp/spdk-raid.sock 00:13:32.460 13:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:32.460 13:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 677782 ']' 00:13:32.460 13:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:32.460 13:13:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:32.460 13:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:32.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:32.460 13:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:32.460 13:13:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.460 [2024-07-26 13:13:12.789602] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:13:32.460 [2024-07-26 13:13:12.789662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid677782 ] 00:13:32.460 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.460 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:32.460 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.460 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:32.460 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.460 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:32.460 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.460 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:32.460 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.460 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:32.460 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.460 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:32.460 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.460 EAL: Requested device 0000:3d:01.6 cannot be used 
00:13:32.460 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.460 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:32.460 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.460 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:32.460 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.460 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:32.460 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.460 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:32.460 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.460 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:32.460 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.460 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:32.460 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.460 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:32.460 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.460 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:32.460 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.460 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:32.460 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.460 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:32.460 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.460 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:32.460 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.460 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.461 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.461 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:32.461 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.461 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.461 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.461 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.461 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.461 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.461 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.461 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.461 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.461 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.461 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:32.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:32.461 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:32.461 [2024-07-26 13:13:12.924568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.719 [2024-07-26 13:13:13.007624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.719 [2024-07-26 13:13:13.066708] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.719 [2024-07-26 13:13:13.066745] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.283 13:13:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:33.283 13:13:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:33.283 13:13:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:13:33.283 13:13:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:33.541 BaseBdev1_malloc 00:13:33.541 13:13:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:33.799 true 00:13:33.799 13:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:34.057 [2024-07-26 13:13:14.352223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:34.057 [2024-07-26 13:13:14.352267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.057 [2024-07-26 13:13:14.352284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f6e190 00:13:34.057 [2024-07-26 13:13:14.352296] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.057 [2024-07-26 13:13:14.353775] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.057 [2024-07-26 13:13:14.353803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:34.057 BaseBdev1 00:13:34.057 13:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:13:34.057 13:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:34.315 BaseBdev2_malloc 00:13:34.315 13:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:34.315 true 00:13:34.315 13:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:34.581 [2024-07-26 13:13:14.990197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:34.581 [2024-07-26 13:13:14.990240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.581 [2024-07-26 13:13:14.990259] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f72e20 00:13:34.581 [2024-07-26 13:13:14.990270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.581 [2024-07-26 13:13:14.991651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.581 [2024-07-26 13:13:14.991679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:34.581 BaseBdev2 00:13:34.581 13:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:13:34.867 [2024-07-26 13:13:15.218838] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:34.867 [2024-07-26 13:13:15.220005] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:34.867 [2024-07-26 13:13:15.220173] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f74a50 
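The `verify_raid_bdev_state` helper seen throughout this test isolates a single raid bdev from the `bdev_raid_get_bdevs all` RPC output with a `jq` filter and then reads individual fields back out of it. A minimal standalone sketch of that filtering step follows; the JSON sample is illustrative (field names mirror the log, values are assumptions), and `jq` is assumed to be installed:

```shell
# Hypothetical sample of `bdev_raid_get_bdevs all` output; values are
# illustrative, modeled on the raid_bdev1 state dumps in this log.
cat > /tmp/bdevs.json <<'EOF'
[
  {
    "name": "raid_bdev1",
    "state": "online",
    "raid_level": "raid1",
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 2
  }
]
EOF

# Same jq filter the test uses to pick one raid bdev out of the list.
raid_bdev_info=$(jq -r '.[] | select(.name == "raid_bdev1")' /tmp/bdevs.json)

# Individual fields are then extracted from the selected object.
state=$(echo "$raid_bdev_info" | jq -r '.state')
discovered=$(echo "$raid_bdev_info" | jq -r '.num_base_bdevs_discovered')
echo "$state $discovered"
```

In the actual test the JSON comes from `rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all`, and each extracted field is compared against the expected state (e.g. `num_base_bdevs_discovered` dropping from 2 to 1 after a base bdev is failed).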
00:13:34.867 [2024-07-26 13:13:15.220186] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:34.867 [2024-07-26 13:13:15.220378] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f77770 00:13:34.867 [2024-07-26 13:13:15.220519] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f74a50 00:13:34.867 [2024-07-26 13:13:15.220529] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f74a50 00:13:34.867 [2024-07-26 13:13:15.220634] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.867 13:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:34.867 13:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:34.867 13:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:34.867 13:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:34.867 13:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:34.867 13:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:34.867 13:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:34.867 13:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:34.867 13:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:34.867 13:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:34.867 13:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.867 13:13:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.139 13:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:35.139 "name": "raid_bdev1", 00:13:35.139 "uuid": "f9c9bad5-f993-433e-858b-7aa9f6b6320c", 00:13:35.139 "strip_size_kb": 0, 00:13:35.139 "state": "online", 00:13:35.139 "raid_level": "raid1", 00:13:35.139 "superblock": true, 00:13:35.139 "num_base_bdevs": 2, 00:13:35.139 "num_base_bdevs_discovered": 2, 00:13:35.139 "num_base_bdevs_operational": 2, 00:13:35.139 "base_bdevs_list": [ 00:13:35.139 { 00:13:35.139 "name": "BaseBdev1", 00:13:35.139 "uuid": "5b7a299e-d301-5ac6-9129-6dc4fce55353", 00:13:35.139 "is_configured": true, 00:13:35.139 "data_offset": 2048, 00:13:35.139 "data_size": 63488 00:13:35.139 }, 00:13:35.139 { 00:13:35.139 "name": "BaseBdev2", 00:13:35.139 "uuid": "8fd0dde5-35b9-5ac9-9706-e20a2c364c24", 00:13:35.139 "is_configured": true, 00:13:35.139 "data_offset": 2048, 00:13:35.139 "data_size": 63488 00:13:35.139 } 00:13:35.139 ] 00:13:35.139 }' 00:13:35.139 13:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:35.139 13:13:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.705 13:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:13:35.705 13:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:35.705 [2024-07-26 13:13:16.137711] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f741b0 00:13:36.640 13:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:36.898 [2024-07-26 13:13:17.256649] bdev_raid.c:2263:_raid_bdev_fail_base_bdev: *NOTICE*: 
Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:36.898 [2024-07-26 13:13:17.256701] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:36.898 [2024-07-26 13:13:17.256874] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1f741b0 00:13:36.898 13:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:13:36.898 13:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:36.898 13:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:13:36.898 13:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=1 00:13:36.898 13:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:36.898 13:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:36.898 13:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:36.898 13:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:13:36.898 13:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:13:36.898 13:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:13:36.899 13:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:36.899 13:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:36.899 13:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:36.899 13:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:36.899 13:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.899 13:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.158 13:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:37.158 "name": "raid_bdev1", 00:13:37.158 "uuid": "f9c9bad5-f993-433e-858b-7aa9f6b6320c", 00:13:37.158 "strip_size_kb": 0, 00:13:37.158 "state": "online", 00:13:37.158 "raid_level": "raid1", 00:13:37.158 "superblock": true, 00:13:37.158 "num_base_bdevs": 2, 00:13:37.158 "num_base_bdevs_discovered": 1, 00:13:37.158 "num_base_bdevs_operational": 1, 00:13:37.158 "base_bdevs_list": [ 00:13:37.158 { 00:13:37.158 "name": null, 00:13:37.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.158 "is_configured": false, 00:13:37.158 "data_offset": 2048, 00:13:37.158 "data_size": 63488 00:13:37.158 }, 00:13:37.158 { 00:13:37.158 "name": "BaseBdev2", 00:13:37.158 "uuid": "8fd0dde5-35b9-5ac9-9706-e20a2c364c24", 00:13:37.158 "is_configured": true, 00:13:37.158 "data_offset": 2048, 00:13:37.158 "data_size": 63488 00:13:37.158 } 00:13:37.158 ] 00:13:37.158 }' 00:13:37.158 13:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:37.158 13:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.725 13:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:37.983 [2024-07-26 13:13:18.303421] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:37.983 [2024-07-26 13:13:18.303456] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:37.983 [2024-07-26 13:13:18.306331] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.983 [2024-07-26 
13:13:18.306357] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.983 [2024-07-26 13:13:18.306408] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:37.983 [2024-07-26 13:13:18.306419] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f74a50 name raid_bdev1, state offline 00:13:37.983 0 00:13:37.983 13:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 677782 00:13:37.983 13:13:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 677782 ']' 00:13:37.983 13:13:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 677782 00:13:37.983 13:13:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:13:37.983 13:13:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:37.983 13:13:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 677782 00:13:37.983 13:13:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:37.983 13:13:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:37.983 13:13:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 677782' 00:13:37.983 killing process with pid 677782 00:13:37.983 13:13:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 677782 00:13:37.983 [2024-07-26 13:13:18.383081] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:37.983 13:13:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 677782 00:13:37.983 [2024-07-26 13:13:18.392472] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:38.242 13:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 
00:13:38.242 13:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.QP7b8VHR4m 00:13:38.242 13:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:13:38.242 13:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:13:38.242 13:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:13:38.242 13:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:38.242 13:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:13:38.242 13:13:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:38.242 00:13:38.242 real 0m5.886s 00:13:38.243 user 0m9.082s 00:13:38.243 sys 0m1.095s 00:13:38.243 13:13:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:38.243 13:13:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.243 ************************************ 00:13:38.243 END TEST raid_write_error_test 00:13:38.243 ************************************ 00:13:38.243 13:13:18 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:13:38.243 13:13:18 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:13:38.243 13:13:18 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:13:38.243 13:13:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:38.243 13:13:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:38.243 13:13:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:38.243 ************************************ 00:13:38.243 START TEST raid_state_function_test 00:13:38.243 ************************************ 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 
false 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:38.243 13:13:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=678938 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 678938' 00:13:38.243 Process raid pid: 678938 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 678938 /var/tmp/spdk-raid.sock 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 678938 ']' 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-raid.sock...' 00:13:38.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:38.243 13:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.243 [2024-07-26 13:13:18.759545] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:13:38.243 [2024-07-26 13:13:18.759605] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.503 [2024-07-26 13:13:18.893336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.503 [2024-07-26 13:13:18.979973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.503 [2024-07-26 13:13:19.038748] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.503 [2024-07-26 13:13:19.038780] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:39.327 13:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:39.327 13:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:13:39.327 13:13:19 bdev_raid.raid_state_function_test --
bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:39.586 [2024-07-26 13:13:19.874498] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:39.586 [2024-07-26 13:13:19.874537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:39.586 [2024-07-26 13:13:19.874547] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:39.586 [2024-07-26 13:13:19.874557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:39.586 [2024-07-26 13:13:19.874565] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:39.586 [2024-07-26 13:13:19.874575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:39.586 13:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:39.586 13:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:39.586 13:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:39.586 13:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:39.586 13:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:39.586 13:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:39.586 13:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:39.586 13:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:39.586 13:13:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:39.586 13:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:39.586 13:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:39.586 13:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.845 13:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:39.845 "name": "Existed_Raid", 00:13:39.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.845 "strip_size_kb": 64, 00:13:39.845 "state": "configuring", 00:13:39.845 "raid_level": "raid0", 00:13:39.845 "superblock": false, 00:13:39.845 "num_base_bdevs": 3, 00:13:39.845 "num_base_bdevs_discovered": 0, 00:13:39.845 "num_base_bdevs_operational": 3, 00:13:39.845 "base_bdevs_list": [ 00:13:39.845 { 00:13:39.845 "name": "BaseBdev1", 00:13:39.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.845 "is_configured": false, 00:13:39.845 "data_offset": 0, 00:13:39.845 "data_size": 0 00:13:39.845 }, 00:13:39.845 { 00:13:39.845 "name": "BaseBdev2", 00:13:39.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.845 "is_configured": false, 00:13:39.845 "data_offset": 0, 00:13:39.845 "data_size": 0 00:13:39.845 }, 00:13:39.845 { 00:13:39.845 "name": "BaseBdev3", 00:13:39.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.845 "is_configured": false, 00:13:39.845 "data_offset": 0, 00:13:39.845 "data_size": 0 00:13:39.845 } 00:13:39.845 ] 00:13:39.845 }' 00:13:39.845 13:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:39.845 13:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.411 13:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:40.411 [2024-07-26 13:13:20.937182] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:40.411 [2024-07-26 13:13:20.937221] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10a0f40 name Existed_Raid, state configuring 00:13:40.670 13:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:40.670 [2024-07-26 13:13:21.165786] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:40.670 [2024-07-26 13:13:21.165815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:40.670 [2024-07-26 13:13:21.165824] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:40.670 [2024-07-26 13:13:21.165835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:40.670 [2024-07-26 13:13:21.165843] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:40.670 [2024-07-26 13:13:21.165853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:40.670 13:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:40.929 [2024-07-26 13:13:21.399692] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:40.929 BaseBdev1 00:13:40.929 13:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:40.929 13:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 
-- # local bdev_name=BaseBdev1 00:13:40.929 13:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:40.929 13:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:40.929 13:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:40.929 13:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:40.929 13:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:41.187 13:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:41.445 [ 00:13:41.445 { 00:13:41.445 "name": "BaseBdev1", 00:13:41.445 "aliases": [ 00:13:41.445 "117c5832-cd65-40a1-81e3-1e8826618c60" 00:13:41.445 ], 00:13:41.445 "product_name": "Malloc disk", 00:13:41.445 "block_size": 512, 00:13:41.445 "num_blocks": 65536, 00:13:41.445 "uuid": "117c5832-cd65-40a1-81e3-1e8826618c60", 00:13:41.445 "assigned_rate_limits": { 00:13:41.445 "rw_ios_per_sec": 0, 00:13:41.445 "rw_mbytes_per_sec": 0, 00:13:41.445 "r_mbytes_per_sec": 0, 00:13:41.445 "w_mbytes_per_sec": 0 00:13:41.445 }, 00:13:41.445 "claimed": true, 00:13:41.445 "claim_type": "exclusive_write", 00:13:41.445 "zoned": false, 00:13:41.445 "supported_io_types": { 00:13:41.445 "read": true, 00:13:41.445 "write": true, 00:13:41.445 "unmap": true, 00:13:41.445 "flush": true, 00:13:41.445 "reset": true, 00:13:41.445 "nvme_admin": false, 00:13:41.445 "nvme_io": false, 00:13:41.445 "nvme_io_md": false, 00:13:41.445 "write_zeroes": true, 00:13:41.445 "zcopy": true, 00:13:41.445 "get_zone_info": false, 00:13:41.445 "zone_management": false, 00:13:41.445 "zone_append": false, 00:13:41.445 
"compare": false, 00:13:41.445 "compare_and_write": false, 00:13:41.446 "abort": true, 00:13:41.446 "seek_hole": false, 00:13:41.446 "seek_data": false, 00:13:41.446 "copy": true, 00:13:41.446 "nvme_iov_md": false 00:13:41.446 }, 00:13:41.446 "memory_domains": [ 00:13:41.446 { 00:13:41.446 "dma_device_id": "system", 00:13:41.446 "dma_device_type": 1 00:13:41.446 }, 00:13:41.446 { 00:13:41.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.446 "dma_device_type": 2 00:13:41.446 } 00:13:41.446 ], 00:13:41.446 "driver_specific": {} 00:13:41.446 } 00:13:41.446 ] 00:13:41.446 13:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:41.446 13:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:41.446 13:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:41.446 13:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:41.446 13:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:41.446 13:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:41.446 13:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:41.446 13:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:41.446 13:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:41.446 13:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:41.446 13:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:41.446 13:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:41.446 13:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.705 13:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:41.705 "name": "Existed_Raid", 00:13:41.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.705 "strip_size_kb": 64, 00:13:41.705 "state": "configuring", 00:13:41.705 "raid_level": "raid0", 00:13:41.705 "superblock": false, 00:13:41.705 "num_base_bdevs": 3, 00:13:41.705 "num_base_bdevs_discovered": 1, 00:13:41.705 "num_base_bdevs_operational": 3, 00:13:41.705 "base_bdevs_list": [ 00:13:41.705 { 00:13:41.705 "name": "BaseBdev1", 00:13:41.705 "uuid": "117c5832-cd65-40a1-81e3-1e8826618c60", 00:13:41.705 "is_configured": true, 00:13:41.705 "data_offset": 0, 00:13:41.705 "data_size": 65536 00:13:41.705 }, 00:13:41.705 { 00:13:41.705 "name": "BaseBdev2", 00:13:41.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.705 "is_configured": false, 00:13:41.705 "data_offset": 0, 00:13:41.705 "data_size": 0 00:13:41.705 }, 00:13:41.705 { 00:13:41.705 "name": "BaseBdev3", 00:13:41.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.705 "is_configured": false, 00:13:41.705 "data_offset": 0, 00:13:41.705 "data_size": 0 00:13:41.705 } 00:13:41.705 ] 00:13:41.705 }' 00:13:41.705 13:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:41.705 13:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.271 13:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:42.530 [2024-07-26 13:13:22.871578] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:42.530 [2024-07-26 13:13:22.871620] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x10a0810 name Existed_Raid, state configuring 00:13:42.530 13:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:42.788 [2024-07-26 13:13:23.096212] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:42.788 [2024-07-26 13:13:23.097606] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:42.788 [2024-07-26 13:13:23.097637] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:42.788 [2024-07-26 13:13:23.097647] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:42.788 [2024-07-26 13:13:23.097657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:42.788 13:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:42.788 13:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:42.788 13:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:42.788 13:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:42.788 13:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:42.788 13:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:42.788 13:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:42.788 13:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:42.788 13:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:13:42.788 13:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:42.788 13:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:42.788 13:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:42.788 13:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:42.788 13:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.046 13:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:43.046 "name": "Existed_Raid", 00:13:43.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.046 "strip_size_kb": 64, 00:13:43.046 "state": "configuring", 00:13:43.046 "raid_level": "raid0", 00:13:43.046 "superblock": false, 00:13:43.046 "num_base_bdevs": 3, 00:13:43.046 "num_base_bdevs_discovered": 1, 00:13:43.046 "num_base_bdevs_operational": 3, 00:13:43.046 "base_bdevs_list": [ 00:13:43.046 { 00:13:43.046 "name": "BaseBdev1", 00:13:43.046 "uuid": "117c5832-cd65-40a1-81e3-1e8826618c60", 00:13:43.046 "is_configured": true, 00:13:43.046 "data_offset": 0, 00:13:43.046 "data_size": 65536 00:13:43.046 }, 00:13:43.046 { 00:13:43.046 "name": "BaseBdev2", 00:13:43.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.046 "is_configured": false, 00:13:43.046 "data_offset": 0, 00:13:43.046 "data_size": 0 00:13:43.046 }, 00:13:43.046 { 00:13:43.046 "name": "BaseBdev3", 00:13:43.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.046 "is_configured": false, 00:13:43.046 "data_offset": 0, 00:13:43.046 "data_size": 0 00:13:43.046 } 00:13:43.046 ] 00:13:43.046 }' 00:13:43.046 13:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:43.046 
13:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.612 13:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:43.612 [2024-07-26 13:13:24.114083] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:43.612 BaseBdev2 00:13:43.612 13:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:43.612 13:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:43.612 13:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:43.612 13:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:43.612 13:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:43.612 13:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:43.612 13:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:43.870 13:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:44.129 [ 00:13:44.129 { 00:13:44.129 "name": "BaseBdev2", 00:13:44.129 "aliases": [ 00:13:44.129 "91428ec5-2306-4e11-85e6-30fa1b827250" 00:13:44.129 ], 00:13:44.129 "product_name": "Malloc disk", 00:13:44.129 "block_size": 512, 00:13:44.129 "num_blocks": 65536, 00:13:44.129 "uuid": "91428ec5-2306-4e11-85e6-30fa1b827250", 00:13:44.129 "assigned_rate_limits": { 00:13:44.129 "rw_ios_per_sec": 0, 00:13:44.129 "rw_mbytes_per_sec": 0, 00:13:44.129 
"r_mbytes_per_sec": 0, 00:13:44.129 "w_mbytes_per_sec": 0 00:13:44.129 }, 00:13:44.129 "claimed": true, 00:13:44.129 "claim_type": "exclusive_write", 00:13:44.129 "zoned": false, 00:13:44.129 "supported_io_types": { 00:13:44.129 "read": true, 00:13:44.129 "write": true, 00:13:44.129 "unmap": true, 00:13:44.129 "flush": true, 00:13:44.129 "reset": true, 00:13:44.129 "nvme_admin": false, 00:13:44.129 "nvme_io": false, 00:13:44.129 "nvme_io_md": false, 00:13:44.129 "write_zeroes": true, 00:13:44.129 "zcopy": true, 00:13:44.129 "get_zone_info": false, 00:13:44.129 "zone_management": false, 00:13:44.129 "zone_append": false, 00:13:44.129 "compare": false, 00:13:44.129 "compare_and_write": false, 00:13:44.129 "abort": true, 00:13:44.129 "seek_hole": false, 00:13:44.129 "seek_data": false, 00:13:44.129 "copy": true, 00:13:44.129 "nvme_iov_md": false 00:13:44.129 }, 00:13:44.129 "memory_domains": [ 00:13:44.129 { 00:13:44.129 "dma_device_id": "system", 00:13:44.129 "dma_device_type": 1 00:13:44.129 }, 00:13:44.129 { 00:13:44.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.129 "dma_device_type": 2 00:13:44.129 } 00:13:44.129 ], 00:13:44.129 "driver_specific": {} 00:13:44.129 } 00:13:44.129 ] 00:13:44.129 13:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:44.129 13:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:44.129 13:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:44.129 13:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:44.129 13:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:44.129 13:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:44.129 13:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid0 00:13:44.129 13:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:44.129 13:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:44.129 13:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:44.129 13:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:44.129 13:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:44.129 13:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:44.129 13:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:44.129 13:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.388 13:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:44.388 "name": "Existed_Raid", 00:13:44.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.388 "strip_size_kb": 64, 00:13:44.388 "state": "configuring", 00:13:44.388 "raid_level": "raid0", 00:13:44.388 "superblock": false, 00:13:44.388 "num_base_bdevs": 3, 00:13:44.388 "num_base_bdevs_discovered": 2, 00:13:44.388 "num_base_bdevs_operational": 3, 00:13:44.388 "base_bdevs_list": [ 00:13:44.388 { 00:13:44.388 "name": "BaseBdev1", 00:13:44.388 "uuid": "117c5832-cd65-40a1-81e3-1e8826618c60", 00:13:44.388 "is_configured": true, 00:13:44.388 "data_offset": 0, 00:13:44.388 "data_size": 65536 00:13:44.388 }, 00:13:44.388 { 00:13:44.388 "name": "BaseBdev2", 00:13:44.388 "uuid": "91428ec5-2306-4e11-85e6-30fa1b827250", 00:13:44.388 "is_configured": true, 00:13:44.388 "data_offset": 0, 00:13:44.388 "data_size": 65536 00:13:44.388 }, 00:13:44.388 { 00:13:44.388 
"name": "BaseBdev3", 00:13:44.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.388 "is_configured": false, 00:13:44.388 "data_offset": 0, 00:13:44.388 "data_size": 0 00:13:44.388 } 00:13:44.388 ] 00:13:44.388 }' 00:13:44.388 13:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:44.388 13:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.954 13:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:45.212 [2024-07-26 13:13:25.585218] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:45.212 [2024-07-26 13:13:25.585248] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x10a1710 00:13:45.212 [2024-07-26 13:13:25.585256] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:45.212 [2024-07-26 13:13:25.585433] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10a13e0 00:13:45.212 [2024-07-26 13:13:25.585543] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10a1710 00:13:45.212 [2024-07-26 13:13:25.585552] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x10a1710 00:13:45.212 [2024-07-26 13:13:25.585697] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.212 BaseBdev3 00:13:45.212 13:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:13:45.212 13:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:45.212 13:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:45.212 13:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 
00:13:45.212 13:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:45.212 13:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:45.212 13:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:45.470 13:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:45.728 [ 00:13:45.728 { 00:13:45.728 "name": "BaseBdev3", 00:13:45.728 "aliases": [ 00:13:45.728 "f92508e9-dc18-4809-a3c6-820177f1f1bf" 00:13:45.728 ], 00:13:45.728 "product_name": "Malloc disk", 00:13:45.728 "block_size": 512, 00:13:45.728 "num_blocks": 65536, 00:13:45.728 "uuid": "f92508e9-dc18-4809-a3c6-820177f1f1bf", 00:13:45.728 "assigned_rate_limits": { 00:13:45.728 "rw_ios_per_sec": 0, 00:13:45.728 "rw_mbytes_per_sec": 0, 00:13:45.728 "r_mbytes_per_sec": 0, 00:13:45.728 "w_mbytes_per_sec": 0 00:13:45.728 }, 00:13:45.728 "claimed": true, 00:13:45.728 "claim_type": "exclusive_write", 00:13:45.728 "zoned": false, 00:13:45.728 "supported_io_types": { 00:13:45.728 "read": true, 00:13:45.728 "write": true, 00:13:45.728 "unmap": true, 00:13:45.728 "flush": true, 00:13:45.728 "reset": true, 00:13:45.728 "nvme_admin": false, 00:13:45.728 "nvme_io": false, 00:13:45.728 "nvme_io_md": false, 00:13:45.728 "write_zeroes": true, 00:13:45.728 "zcopy": true, 00:13:45.728 "get_zone_info": false, 00:13:45.728 "zone_management": false, 00:13:45.728 "zone_append": false, 00:13:45.728 "compare": false, 00:13:45.728 "compare_and_write": false, 00:13:45.728 "abort": true, 00:13:45.728 "seek_hole": false, 00:13:45.728 "seek_data": false, 00:13:45.728 "copy": true, 00:13:45.728 "nvme_iov_md": false 00:13:45.728 }, 00:13:45.728 
"memory_domains": [ 00:13:45.728 { 00:13:45.728 "dma_device_id": "system", 00:13:45.728 "dma_device_type": 1 00:13:45.728 }, 00:13:45.728 { 00:13:45.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.728 "dma_device_type": 2 00:13:45.728 } 00:13:45.728 ], 00:13:45.728 "driver_specific": {} 00:13:45.728 } 00:13:45.728 ] 00:13:45.728 13:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:45.728 13:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:45.728 13:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:45.728 13:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:13:45.728 13:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:45.728 13:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:45.728 13:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:45.728 13:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:45.728 13:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:45.728 13:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:45.728 13:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:45.728 13:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:45.728 13:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:45.728 13:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:45.728 
13:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.986 13:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:45.986 "name": "Existed_Raid", 00:13:45.986 "uuid": "575ae08c-128d-45d4-a2a2-605e0fb65116", 00:13:45.986 "strip_size_kb": 64, 00:13:45.986 "state": "online", 00:13:45.986 "raid_level": "raid0", 00:13:45.986 "superblock": false, 00:13:45.986 "num_base_bdevs": 3, 00:13:45.986 "num_base_bdevs_discovered": 3, 00:13:45.986 "num_base_bdevs_operational": 3, 00:13:45.986 "base_bdevs_list": [ 00:13:45.986 { 00:13:45.986 "name": "BaseBdev1", 00:13:45.986 "uuid": "117c5832-cd65-40a1-81e3-1e8826618c60", 00:13:45.986 "is_configured": true, 00:13:45.986 "data_offset": 0, 00:13:45.986 "data_size": 65536 00:13:45.986 }, 00:13:45.986 { 00:13:45.986 "name": "BaseBdev2", 00:13:45.986 "uuid": "91428ec5-2306-4e11-85e6-30fa1b827250", 00:13:45.986 "is_configured": true, 00:13:45.986 "data_offset": 0, 00:13:45.986 "data_size": 65536 00:13:45.986 }, 00:13:45.986 { 00:13:45.986 "name": "BaseBdev3", 00:13:45.987 "uuid": "f92508e9-dc18-4809-a3c6-820177f1f1bf", 00:13:45.987 "is_configured": true, 00:13:45.987 "data_offset": 0, 00:13:45.987 "data_size": 65536 00:13:45.987 } 00:13:45.987 ] 00:13:45.987 }' 00:13:45.987 13:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:45.987 13:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.553 13:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:46.553 13:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:46.553 13:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:46.553 13:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 
00:13:46.553 13:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:46.553 13:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:46.553 13:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:46.553 13:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:46.553 [2024-07-26 13:13:27.061418] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:46.811 13:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:46.811 "name": "Existed_Raid", 00:13:46.811 "aliases": [ 00:13:46.811 "575ae08c-128d-45d4-a2a2-605e0fb65116" 00:13:46.811 ], 00:13:46.811 "product_name": "Raid Volume", 00:13:46.811 "block_size": 512, 00:13:46.811 "num_blocks": 196608, 00:13:46.811 "uuid": "575ae08c-128d-45d4-a2a2-605e0fb65116", 00:13:46.811 "assigned_rate_limits": { 00:13:46.811 "rw_ios_per_sec": 0, 00:13:46.811 "rw_mbytes_per_sec": 0, 00:13:46.811 "r_mbytes_per_sec": 0, 00:13:46.811 "w_mbytes_per_sec": 0 00:13:46.811 }, 00:13:46.811 "claimed": false, 00:13:46.811 "zoned": false, 00:13:46.811 "supported_io_types": { 00:13:46.811 "read": true, 00:13:46.811 "write": true, 00:13:46.811 "unmap": true, 00:13:46.811 "flush": true, 00:13:46.811 "reset": true, 00:13:46.811 "nvme_admin": false, 00:13:46.811 "nvme_io": false, 00:13:46.811 "nvme_io_md": false, 00:13:46.811 "write_zeroes": true, 00:13:46.811 "zcopy": false, 00:13:46.811 "get_zone_info": false, 00:13:46.811 "zone_management": false, 00:13:46.811 "zone_append": false, 00:13:46.811 "compare": false, 00:13:46.811 "compare_and_write": false, 00:13:46.811 "abort": false, 00:13:46.811 "seek_hole": false, 00:13:46.811 "seek_data": false, 00:13:46.811 "copy": false, 00:13:46.811 "nvme_iov_md": false 00:13:46.811 }, 
00:13:46.811 "memory_domains": [ 00:13:46.811 { 00:13:46.811 "dma_device_id": "system", 00:13:46.811 "dma_device_type": 1 00:13:46.811 }, 00:13:46.811 { 00:13:46.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.811 "dma_device_type": 2 00:13:46.811 }, 00:13:46.811 { 00:13:46.811 "dma_device_id": "system", 00:13:46.811 "dma_device_type": 1 00:13:46.811 }, 00:13:46.811 { 00:13:46.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.811 "dma_device_type": 2 00:13:46.811 }, 00:13:46.811 { 00:13:46.811 "dma_device_id": "system", 00:13:46.811 "dma_device_type": 1 00:13:46.811 }, 00:13:46.811 { 00:13:46.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.811 "dma_device_type": 2 00:13:46.811 } 00:13:46.811 ], 00:13:46.811 "driver_specific": { 00:13:46.811 "raid": { 00:13:46.811 "uuid": "575ae08c-128d-45d4-a2a2-605e0fb65116", 00:13:46.811 "strip_size_kb": 64, 00:13:46.811 "state": "online", 00:13:46.811 "raid_level": "raid0", 00:13:46.811 "superblock": false, 00:13:46.811 "num_base_bdevs": 3, 00:13:46.811 "num_base_bdevs_discovered": 3, 00:13:46.811 "num_base_bdevs_operational": 3, 00:13:46.811 "base_bdevs_list": [ 00:13:46.811 { 00:13:46.811 "name": "BaseBdev1", 00:13:46.811 "uuid": "117c5832-cd65-40a1-81e3-1e8826618c60", 00:13:46.811 "is_configured": true, 00:13:46.811 "data_offset": 0, 00:13:46.811 "data_size": 65536 00:13:46.811 }, 00:13:46.811 { 00:13:46.811 "name": "BaseBdev2", 00:13:46.811 "uuid": "91428ec5-2306-4e11-85e6-30fa1b827250", 00:13:46.811 "is_configured": true, 00:13:46.811 "data_offset": 0, 00:13:46.811 "data_size": 65536 00:13:46.811 }, 00:13:46.811 { 00:13:46.811 "name": "BaseBdev3", 00:13:46.811 "uuid": "f92508e9-dc18-4809-a3c6-820177f1f1bf", 00:13:46.811 "is_configured": true, 00:13:46.811 "data_offset": 0, 00:13:46.811 "data_size": 65536 00:13:46.811 } 00:13:46.811 ] 00:13:46.811 } 00:13:46.811 } 00:13:46.811 }' 00:13:46.811 13:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:46.811 13:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:46.811 BaseBdev2 00:13:46.811 BaseBdev3' 00:13:46.812 13:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:46.812 13:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:46.812 13:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:47.069 13:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:47.069 "name": "BaseBdev1", 00:13:47.069 "aliases": [ 00:13:47.069 "117c5832-cd65-40a1-81e3-1e8826618c60" 00:13:47.069 ], 00:13:47.069 "product_name": "Malloc disk", 00:13:47.069 "block_size": 512, 00:13:47.069 "num_blocks": 65536, 00:13:47.069 "uuid": "117c5832-cd65-40a1-81e3-1e8826618c60", 00:13:47.069 "assigned_rate_limits": { 00:13:47.069 "rw_ios_per_sec": 0, 00:13:47.069 "rw_mbytes_per_sec": 0, 00:13:47.069 "r_mbytes_per_sec": 0, 00:13:47.069 "w_mbytes_per_sec": 0 00:13:47.069 }, 00:13:47.069 "claimed": true, 00:13:47.069 "claim_type": "exclusive_write", 00:13:47.069 "zoned": false, 00:13:47.069 "supported_io_types": { 00:13:47.069 "read": true, 00:13:47.069 "write": true, 00:13:47.070 "unmap": true, 00:13:47.070 "flush": true, 00:13:47.070 "reset": true, 00:13:47.070 "nvme_admin": false, 00:13:47.070 "nvme_io": false, 00:13:47.070 "nvme_io_md": false, 00:13:47.070 "write_zeroes": true, 00:13:47.070 "zcopy": true, 00:13:47.070 "get_zone_info": false, 00:13:47.070 "zone_management": false, 00:13:47.070 "zone_append": false, 00:13:47.070 "compare": false, 00:13:47.070 "compare_and_write": false, 00:13:47.070 "abort": true, 00:13:47.070 "seek_hole": false, 00:13:47.070 "seek_data": false, 00:13:47.070 "copy": 
true, 00:13:47.070 "nvme_iov_md": false 00:13:47.070 }, 00:13:47.070 "memory_domains": [ 00:13:47.070 { 00:13:47.070 "dma_device_id": "system", 00:13:47.070 "dma_device_type": 1 00:13:47.070 }, 00:13:47.070 { 00:13:47.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.070 "dma_device_type": 2 00:13:47.070 } 00:13:47.070 ], 00:13:47.070 "driver_specific": {} 00:13:47.070 }' 00:13:47.070 13:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:47.070 13:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:47.070 13:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:47.070 13:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:47.070 13:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:47.070 13:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:47.070 13:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:47.070 13:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:47.328 13:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:47.328 13:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:47.328 13:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:47.328 13:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:47.328 13:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:47.328 13:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:47.328 13:13:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:47.586 13:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:47.586 "name": "BaseBdev2", 00:13:47.586 "aliases": [ 00:13:47.586 "91428ec5-2306-4e11-85e6-30fa1b827250" 00:13:47.586 ], 00:13:47.586 "product_name": "Malloc disk", 00:13:47.586 "block_size": 512, 00:13:47.586 "num_blocks": 65536, 00:13:47.586 "uuid": "91428ec5-2306-4e11-85e6-30fa1b827250", 00:13:47.586 "assigned_rate_limits": { 00:13:47.586 "rw_ios_per_sec": 0, 00:13:47.586 "rw_mbytes_per_sec": 0, 00:13:47.586 "r_mbytes_per_sec": 0, 00:13:47.586 "w_mbytes_per_sec": 0 00:13:47.586 }, 00:13:47.586 "claimed": true, 00:13:47.586 "claim_type": "exclusive_write", 00:13:47.586 "zoned": false, 00:13:47.586 "supported_io_types": { 00:13:47.586 "read": true, 00:13:47.586 "write": true, 00:13:47.586 "unmap": true, 00:13:47.586 "flush": true, 00:13:47.586 "reset": true, 00:13:47.586 "nvme_admin": false, 00:13:47.586 "nvme_io": false, 00:13:47.586 "nvme_io_md": false, 00:13:47.586 "write_zeroes": true, 00:13:47.586 "zcopy": true, 00:13:47.586 "get_zone_info": false, 00:13:47.586 "zone_management": false, 00:13:47.586 "zone_append": false, 00:13:47.586 "compare": false, 00:13:47.586 "compare_and_write": false, 00:13:47.586 "abort": true, 00:13:47.586 "seek_hole": false, 00:13:47.586 "seek_data": false, 00:13:47.586 "copy": true, 00:13:47.586 "nvme_iov_md": false 00:13:47.586 }, 00:13:47.586 "memory_domains": [ 00:13:47.586 { 00:13:47.586 "dma_device_id": "system", 00:13:47.586 "dma_device_type": 1 00:13:47.586 }, 00:13:47.586 { 00:13:47.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.586 "dma_device_type": 2 00:13:47.586 } 00:13:47.586 ], 00:13:47.586 "driver_specific": {} 00:13:47.586 }' 00:13:47.586 13:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:47.586 13:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:47.586 13:13:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:47.586 13:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:47.586 13:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:47.586 13:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:47.586 13:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:47.863 13:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:47.863 13:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:47.863 13:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:47.863 13:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:47.863 13:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:47.863 13:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:47.863 13:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:47.863 13:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:48.136 13:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:48.136 "name": "BaseBdev3", 00:13:48.136 "aliases": [ 00:13:48.136 "f92508e9-dc18-4809-a3c6-820177f1f1bf" 00:13:48.136 ], 00:13:48.136 "product_name": "Malloc disk", 00:13:48.136 "block_size": 512, 00:13:48.136 "num_blocks": 65536, 00:13:48.136 "uuid": "f92508e9-dc18-4809-a3c6-820177f1f1bf", 00:13:48.136 "assigned_rate_limits": { 00:13:48.136 "rw_ios_per_sec": 0, 00:13:48.136 "rw_mbytes_per_sec": 0, 00:13:48.136 "r_mbytes_per_sec": 0, 00:13:48.136 
"w_mbytes_per_sec": 0 00:13:48.136 }, 00:13:48.136 "claimed": true, 00:13:48.136 "claim_type": "exclusive_write", 00:13:48.136 "zoned": false, 00:13:48.136 "supported_io_types": { 00:13:48.136 "read": true, 00:13:48.136 "write": true, 00:13:48.136 "unmap": true, 00:13:48.136 "flush": true, 00:13:48.136 "reset": true, 00:13:48.136 "nvme_admin": false, 00:13:48.136 "nvme_io": false, 00:13:48.136 "nvme_io_md": false, 00:13:48.136 "write_zeroes": true, 00:13:48.136 "zcopy": true, 00:13:48.136 "get_zone_info": false, 00:13:48.136 "zone_management": false, 00:13:48.136 "zone_append": false, 00:13:48.136 "compare": false, 00:13:48.136 "compare_and_write": false, 00:13:48.137 "abort": true, 00:13:48.137 "seek_hole": false, 00:13:48.137 "seek_data": false, 00:13:48.137 "copy": true, 00:13:48.137 "nvme_iov_md": false 00:13:48.137 }, 00:13:48.137 "memory_domains": [ 00:13:48.137 { 00:13:48.137 "dma_device_id": "system", 00:13:48.137 "dma_device_type": 1 00:13:48.137 }, 00:13:48.137 { 00:13:48.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.137 "dma_device_type": 2 00:13:48.137 } 00:13:48.137 ], 00:13:48.137 "driver_specific": {} 00:13:48.137 }' 00:13:48.137 13:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:48.137 13:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:48.137 13:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:48.137 13:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:48.137 13:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:48.395 13:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:48.395 13:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:48.395 13:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:48.395 
13:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:48.395 13:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:48.395 13:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:48.395 13:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:48.395 13:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:48.653 [2024-07-26 13:13:29.030399] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:48.653 [2024-07-26 13:13:29.030423] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:48.653 [2024-07-26 13:13:29.030462] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:48.653 13:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:48.653 13:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:13:48.653 13:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:48.653 13:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:48.653 13:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:13:48.653 13:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:13:48.653 13:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:48.653 13:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:13:48.654 13:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:48.654 13:13:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:48.654 13:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:48.654 13:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:48.654 13:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:48.654 13:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:48.654 13:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:48.654 13:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.654 13:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.912 13:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:48.912 "name": "Existed_Raid", 00:13:48.912 "uuid": "575ae08c-128d-45d4-a2a2-605e0fb65116", 00:13:48.912 "strip_size_kb": 64, 00:13:48.912 "state": "offline", 00:13:48.912 "raid_level": "raid0", 00:13:48.912 "superblock": false, 00:13:48.912 "num_base_bdevs": 3, 00:13:48.913 "num_base_bdevs_discovered": 2, 00:13:48.913 "num_base_bdevs_operational": 2, 00:13:48.913 "base_bdevs_list": [ 00:13:48.913 { 00:13:48.913 "name": null, 00:13:48.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.913 "is_configured": false, 00:13:48.913 "data_offset": 0, 00:13:48.913 "data_size": 65536 00:13:48.913 }, 00:13:48.913 { 00:13:48.913 "name": "BaseBdev2", 00:13:48.913 "uuid": "91428ec5-2306-4e11-85e6-30fa1b827250", 00:13:48.913 "is_configured": true, 00:13:48.913 "data_offset": 0, 00:13:48.913 "data_size": 65536 00:13:48.913 }, 00:13:48.913 { 00:13:48.913 "name": "BaseBdev3", 00:13:48.913 "uuid": 
"f92508e9-dc18-4809-a3c6-820177f1f1bf", 00:13:48.913 "is_configured": true, 00:13:48.913 "data_offset": 0, 00:13:48.913 "data_size": 65536 00:13:48.913 } 00:13:48.913 ] 00:13:48.913 }' 00:13:48.913 13:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:48.913 13:13:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.479 13:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:49.479 13:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:49.479 13:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:49.479 13:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:49.737 13:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:49.737 13:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:49.737 13:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:49.995 [2024-07-26 13:13:30.286742] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:49.995 13:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:49.995 13:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:49.995 13:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:49.995 13:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:50.253 13:13:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:50.253 13:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:50.253 13:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:50.253 [2024-07-26 13:13:30.757984] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:50.253 [2024-07-26 13:13:30.758020] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10a1710 name Existed_Raid, state offline 00:13:50.511 13:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:50.511 13:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:50.511 13:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:50.511 13:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:50.511 13:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:50.511 13:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:50.511 13:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:13:50.511 13:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:50.511 13:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:50.511 13:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:50.770 BaseBdev2 00:13:50.770 13:13:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:50.770 13:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:50.770 13:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:50.770 13:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:50.770 13:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:50.770 13:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:50.770 13:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:51.028 13:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:51.286 [ 00:13:51.286 { 00:13:51.286 "name": "BaseBdev2", 00:13:51.286 "aliases": [ 00:13:51.286 "57a97d99-09ad-4d9c-9729-aaa4095c0459" 00:13:51.286 ], 00:13:51.286 "product_name": "Malloc disk", 00:13:51.286 "block_size": 512, 00:13:51.286 "num_blocks": 65536, 00:13:51.286 "uuid": "57a97d99-09ad-4d9c-9729-aaa4095c0459", 00:13:51.286 "assigned_rate_limits": { 00:13:51.286 "rw_ios_per_sec": 0, 00:13:51.286 "rw_mbytes_per_sec": 0, 00:13:51.286 "r_mbytes_per_sec": 0, 00:13:51.286 "w_mbytes_per_sec": 0 00:13:51.286 }, 00:13:51.286 "claimed": false, 00:13:51.286 "zoned": false, 00:13:51.286 "supported_io_types": { 00:13:51.286 "read": true, 00:13:51.286 "write": true, 00:13:51.286 "unmap": true, 00:13:51.286 "flush": true, 00:13:51.286 "reset": true, 00:13:51.286 "nvme_admin": false, 00:13:51.286 "nvme_io": false, 00:13:51.286 "nvme_io_md": false, 00:13:51.286 "write_zeroes": true, 00:13:51.286 "zcopy": true, 
00:13:51.286 "get_zone_info": false, 00:13:51.286 "zone_management": false, 00:13:51.286 "zone_append": false, 00:13:51.286 "compare": false, 00:13:51.286 "compare_and_write": false, 00:13:51.286 "abort": true, 00:13:51.286 "seek_hole": false, 00:13:51.286 "seek_data": false, 00:13:51.286 "copy": true, 00:13:51.286 "nvme_iov_md": false 00:13:51.286 }, 00:13:51.286 "memory_domains": [ 00:13:51.286 { 00:13:51.286 "dma_device_id": "system", 00:13:51.286 "dma_device_type": 1 00:13:51.286 }, 00:13:51.286 { 00:13:51.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.286 "dma_device_type": 2 00:13:51.286 } 00:13:51.286 ], 00:13:51.286 "driver_specific": {} 00:13:51.286 } 00:13:51.286 ] 00:13:51.286 13:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:51.286 13:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:51.287 13:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:51.287 13:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:51.545 BaseBdev3 00:13:51.545 13:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:13:51.545 13:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:51.545 13:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:51.545 13:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:51.545 13:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:51.545 13:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:51.545 13:13:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:51.803 13:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:52.061 [ 00:13:52.061 { 00:13:52.061 "name": "BaseBdev3", 00:13:52.061 "aliases": [ 00:13:52.061 "469a1c73-c626-472b-a2ab-cdf17442084c" 00:13:52.061 ], 00:13:52.061 "product_name": "Malloc disk", 00:13:52.061 "block_size": 512, 00:13:52.061 "num_blocks": 65536, 00:13:52.061 "uuid": "469a1c73-c626-472b-a2ab-cdf17442084c", 00:13:52.061 "assigned_rate_limits": { 00:13:52.061 "rw_ios_per_sec": 0, 00:13:52.061 "rw_mbytes_per_sec": 0, 00:13:52.061 "r_mbytes_per_sec": 0, 00:13:52.061 "w_mbytes_per_sec": 0 00:13:52.061 }, 00:13:52.061 "claimed": false, 00:13:52.061 "zoned": false, 00:13:52.061 "supported_io_types": { 00:13:52.061 "read": true, 00:13:52.061 "write": true, 00:13:52.061 "unmap": true, 00:13:52.061 "flush": true, 00:13:52.061 "reset": true, 00:13:52.061 "nvme_admin": false, 00:13:52.061 "nvme_io": false, 00:13:52.061 "nvme_io_md": false, 00:13:52.061 "write_zeroes": true, 00:13:52.061 "zcopy": true, 00:13:52.061 "get_zone_info": false, 00:13:52.061 "zone_management": false, 00:13:52.061 "zone_append": false, 00:13:52.061 "compare": false, 00:13:52.061 "compare_and_write": false, 00:13:52.061 "abort": true, 00:13:52.061 "seek_hole": false, 00:13:52.061 "seek_data": false, 00:13:52.061 "copy": true, 00:13:52.061 "nvme_iov_md": false 00:13:52.061 }, 00:13:52.061 "memory_domains": [ 00:13:52.061 { 00:13:52.061 "dma_device_id": "system", 00:13:52.061 "dma_device_type": 1 00:13:52.061 }, 00:13:52.061 { 00:13:52.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.061 "dma_device_type": 2 00:13:52.061 } 00:13:52.061 ], 00:13:52.061 "driver_specific": {} 00:13:52.061 } 00:13:52.061 ] 00:13:52.061 
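The per-bdev checks repeated throughout this test follow one pattern: fetch a bdev's JSON with `rpc.py ... bdev_get_bdevs`, then assert individual fields with jq (as in `bdev_raid.sh@205`–`@208`) or filter the RAID's base-bdev list (as in `@201`). A minimal standalone sketch of those jq filters, run against inline samples instead of a live SPDK socket — the sample values here are illustrative, not taken from a real target:

```shell
# Samples shaped like the bdev_get_bdevs / bdev_raid_get_bdevs output above;
# names and values are illustrative placeholders.
bdev='{"name":"BaseBdev1","block_size":512,"num_blocks":65536,"md_size":null}'
raid='{"base_bdevs_list":[{"name":"BaseBdev1","is_configured":true},
                          {"name":"BaseBdev2","is_configured":false}]}'

# bdev_raid.sh@205/@206-style field checks: extract with jq, compare in [[ ]].
# A missing field comes back as the literal string "null".
[[ $(echo "$bdev" | jq .block_size) == 512 ]] && echo "block_size ok"
[[ $(echo "$bdev" | jq .md_size) == null ]] && echo "md_size ok"

# bdev_raid.sh@201-style filter: names of configured base bdevs only.
echo "$raid" | jq -r '.base_bdevs_list[] | select(.is_configured == true).name'
```

Against a live target the input would come from `rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b <name> | jq '.[]'`, since `bdev_get_bdevs` returns a one-element array rather than a bare object.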
13:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:52.061 13:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:52.061 13:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:52.061 13:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:13:52.061 [2024-07-26 13:13:32.557027] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:52.061 [2024-07-26 13:13:32.557062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:52.061 [2024-07-26 13:13:32.557080] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:52.061 [2024-07-26 13:13:32.558302] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:52.061 13:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:52.061 13:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:52.061 13:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:52.061 13:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:52.061 13:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:52.061 13:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:52.061 13:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:52.061 13:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:13:52.061 13:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:52.061 13:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:52.061 13:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:52.061 13:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.319 13:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:52.319 "name": "Existed_Raid", 00:13:52.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.319 "strip_size_kb": 64, 00:13:52.319 "state": "configuring", 00:13:52.319 "raid_level": "raid0", 00:13:52.319 "superblock": false, 00:13:52.319 "num_base_bdevs": 3, 00:13:52.319 "num_base_bdevs_discovered": 2, 00:13:52.319 "num_base_bdevs_operational": 3, 00:13:52.319 "base_bdevs_list": [ 00:13:52.319 { 00:13:52.319 "name": "BaseBdev1", 00:13:52.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.319 "is_configured": false, 00:13:52.319 "data_offset": 0, 00:13:52.319 "data_size": 0 00:13:52.319 }, 00:13:52.319 { 00:13:52.319 "name": "BaseBdev2", 00:13:52.319 "uuid": "57a97d99-09ad-4d9c-9729-aaa4095c0459", 00:13:52.319 "is_configured": true, 00:13:52.319 "data_offset": 0, 00:13:52.319 "data_size": 65536 00:13:52.319 }, 00:13:52.319 { 00:13:52.319 "name": "BaseBdev3", 00:13:52.319 "uuid": "469a1c73-c626-472b-a2ab-cdf17442084c", 00:13:52.319 "is_configured": true, 00:13:52.319 "data_offset": 0, 00:13:52.319 "data_size": 65536 00:13:52.319 } 00:13:52.319 ] 00:13:52.319 }' 00:13:52.319 13:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:52.319 13:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.884 13:13:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:53.142 [2024-07-26 13:13:33.579698] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:53.142 13:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:53.142 13:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:53.142 13:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:53.142 13:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:53.142 13:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:53.142 13:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:53.142 13:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:53.142 13:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:53.142 13:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:53.142 13:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:53.142 13:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:53.142 13:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.400 13:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:53.400 "name": "Existed_Raid", 00:13:53.400 "uuid": "00000000-0000-0000-0000-000000000000", 
00:13:53.400 "strip_size_kb": 64, 00:13:53.400 "state": "configuring", 00:13:53.400 "raid_level": "raid0", 00:13:53.400 "superblock": false, 00:13:53.400 "num_base_bdevs": 3, 00:13:53.400 "num_base_bdevs_discovered": 1, 00:13:53.400 "num_base_bdevs_operational": 3, 00:13:53.400 "base_bdevs_list": [ 00:13:53.400 { 00:13:53.400 "name": "BaseBdev1", 00:13:53.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.400 "is_configured": false, 00:13:53.400 "data_offset": 0, 00:13:53.400 "data_size": 0 00:13:53.400 }, 00:13:53.400 { 00:13:53.400 "name": null, 00:13:53.400 "uuid": "57a97d99-09ad-4d9c-9729-aaa4095c0459", 00:13:53.400 "is_configured": false, 00:13:53.400 "data_offset": 0, 00:13:53.400 "data_size": 65536 00:13:53.400 }, 00:13:53.400 { 00:13:53.400 "name": "BaseBdev3", 00:13:53.400 "uuid": "469a1c73-c626-472b-a2ab-cdf17442084c", 00:13:53.400 "is_configured": true, 00:13:53.400 "data_offset": 0, 00:13:53.400 "data_size": 65536 00:13:53.400 } 00:13:53.400 ] 00:13:53.400 }' 00:13:53.400 13:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:53.400 13:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.967 13:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:53.967 13:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:54.225 13:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:13:54.225 13:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:54.483 [2024-07-26 13:13:34.834429] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:54.483 
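The `waitforbdev` helper invoked after each `bdev_malloc_create` above (for BaseBdev2, BaseBdev3, and next for BaseBdev1) boils down to polling `bdev_get_bdevs` until the bdev appears or a timeout expires — the `-t 2000` seen in the log is that timeout in milliseconds. A hedged, self-contained sketch of the loop, with the probe command passed as a parameter so it can run without an SPDK socket (against a real target the probe would be `rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b <name>`):

```shell
# Sketch of the poll-until-present pattern; not the autotest helper itself.
# $1: probe command that succeeds once the resource exists
# $2: max attempts at ~0.1s apart (20 tries ~= the log's -t 2000 ms)
wait_for() {
    local probe=$1 tries=${2:-20}
    local i=0
    until eval "$probe"; do
        i=$((i + 1))
        [ "$i" -ge "$tries" ] && return 1   # timed out, bdev never appeared
        sleep 0.1
    done
    return 0                                # probe succeeded
}

wait_for true 5  && echo "found"            # probe succeeds immediately
wait_for false 3 || echo "timed out"        # probe never succeeds
```

Keeping the probe generic is the design point: the same loop serves `waitforbdev`, socket-readiness checks, and similar "wait for the target to converge" steps in these scripts.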
BaseBdev1 00:13:54.483 13:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:54.483 13:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:54.483 13:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:54.483 13:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:54.483 13:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:54.483 13:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:54.483 13:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:54.741 13:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:55.000 [ 00:13:55.000 { 00:13:55.000 "name": "BaseBdev1", 00:13:55.000 "aliases": [ 00:13:55.000 "f5acabbb-ff8a-4574-84e8-874bbc439e4d" 00:13:55.000 ], 00:13:55.000 "product_name": "Malloc disk", 00:13:55.000 "block_size": 512, 00:13:55.000 "num_blocks": 65536, 00:13:55.000 "uuid": "f5acabbb-ff8a-4574-84e8-874bbc439e4d", 00:13:55.000 "assigned_rate_limits": { 00:13:55.000 "rw_ios_per_sec": 0, 00:13:55.000 "rw_mbytes_per_sec": 0, 00:13:55.000 "r_mbytes_per_sec": 0, 00:13:55.000 "w_mbytes_per_sec": 0 00:13:55.000 }, 00:13:55.000 "claimed": true, 00:13:55.000 "claim_type": "exclusive_write", 00:13:55.000 "zoned": false, 00:13:55.000 "supported_io_types": { 00:13:55.000 "read": true, 00:13:55.000 "write": true, 00:13:55.000 "unmap": true, 00:13:55.000 "flush": true, 00:13:55.000 "reset": true, 00:13:55.000 "nvme_admin": false, 00:13:55.000 "nvme_io": false, 00:13:55.000 
"nvme_io_md": false, 00:13:55.000 "write_zeroes": true, 00:13:55.000 "zcopy": true, 00:13:55.000 "get_zone_info": false, 00:13:55.000 "zone_management": false, 00:13:55.000 "zone_append": false, 00:13:55.000 "compare": false, 00:13:55.000 "compare_and_write": false, 00:13:55.000 "abort": true, 00:13:55.000 "seek_hole": false, 00:13:55.000 "seek_data": false, 00:13:55.000 "copy": true, 00:13:55.000 "nvme_iov_md": false 00:13:55.000 }, 00:13:55.000 "memory_domains": [ 00:13:55.000 { 00:13:55.000 "dma_device_id": "system", 00:13:55.000 "dma_device_type": 1 00:13:55.000 }, 00:13:55.000 { 00:13:55.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.000 "dma_device_type": 2 00:13:55.000 } 00:13:55.000 ], 00:13:55.000 "driver_specific": {} 00:13:55.000 } 00:13:55.000 ] 00:13:55.000 13:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:55.000 13:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:55.000 13:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:55.000 13:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:55.000 13:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:55.000 13:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:55.000 13:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:55.000 13:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:55.000 13:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:55.000 13:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:55.000 13:13:35 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@124 -- # local tmp 00:13:55.000 13:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:55.000 13:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.259 13:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:55.259 "name": "Existed_Raid", 00:13:55.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.259 "strip_size_kb": 64, 00:13:55.259 "state": "configuring", 00:13:55.259 "raid_level": "raid0", 00:13:55.259 "superblock": false, 00:13:55.259 "num_base_bdevs": 3, 00:13:55.259 "num_base_bdevs_discovered": 2, 00:13:55.259 "num_base_bdevs_operational": 3, 00:13:55.259 "base_bdevs_list": [ 00:13:55.259 { 00:13:55.259 "name": "BaseBdev1", 00:13:55.259 "uuid": "f5acabbb-ff8a-4574-84e8-874bbc439e4d", 00:13:55.259 "is_configured": true, 00:13:55.259 "data_offset": 0, 00:13:55.259 "data_size": 65536 00:13:55.259 }, 00:13:55.259 { 00:13:55.259 "name": null, 00:13:55.259 "uuid": "57a97d99-09ad-4d9c-9729-aaa4095c0459", 00:13:55.259 "is_configured": false, 00:13:55.259 "data_offset": 0, 00:13:55.259 "data_size": 65536 00:13:55.259 }, 00:13:55.259 { 00:13:55.259 "name": "BaseBdev3", 00:13:55.259 "uuid": "469a1c73-c626-472b-a2ab-cdf17442084c", 00:13:55.259 "is_configured": true, 00:13:55.259 "data_offset": 0, 00:13:55.259 "data_size": 65536 00:13:55.259 } 00:13:55.259 ] 00:13:55.259 }' 00:13:55.259 13:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:55.259 13:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.825 13:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:55.825 
13:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:55.825 13:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:55.825 13:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:56.099 [2024-07-26 13:13:36.538939] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:56.099 13:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:56.099 13:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:56.099 13:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:56.099 13:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:56.099 13:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:56.099 13:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:56.099 13:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:56.099 13:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:56.099 13:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:56.099 13:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:56.099 13:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.099 13:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:13:56.360 13:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:56.360 "name": "Existed_Raid", 00:13:56.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.360 "strip_size_kb": 64, 00:13:56.360 "state": "configuring", 00:13:56.360 "raid_level": "raid0", 00:13:56.360 "superblock": false, 00:13:56.360 "num_base_bdevs": 3, 00:13:56.360 "num_base_bdevs_discovered": 1, 00:13:56.360 "num_base_bdevs_operational": 3, 00:13:56.360 "base_bdevs_list": [ 00:13:56.360 { 00:13:56.360 "name": "BaseBdev1", 00:13:56.360 "uuid": "f5acabbb-ff8a-4574-84e8-874bbc439e4d", 00:13:56.360 "is_configured": true, 00:13:56.360 "data_offset": 0, 00:13:56.360 "data_size": 65536 00:13:56.360 }, 00:13:56.360 { 00:13:56.360 "name": null, 00:13:56.360 "uuid": "57a97d99-09ad-4d9c-9729-aaa4095c0459", 00:13:56.360 "is_configured": false, 00:13:56.360 "data_offset": 0, 00:13:56.360 "data_size": 65536 00:13:56.360 }, 00:13:56.360 { 00:13:56.360 "name": null, 00:13:56.360 "uuid": "469a1c73-c626-472b-a2ab-cdf17442084c", 00:13:56.360 "is_configured": false, 00:13:56.360 "data_offset": 0, 00:13:56.360 "data_size": 65536 00:13:56.360 } 00:13:56.360 ] 00:13:56.360 }' 00:13:56.360 13:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:56.360 13:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.926 13:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.926 13:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:57.198 13:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:57.198 13:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:57.198 [2024-07-26 13:13:37.661922] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:57.198 13:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:57.198 13:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:57.198 13:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:57.198 13:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:57.198 13:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:57.198 13:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:57.198 13:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:57.198 13:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:57.198 13:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:57.198 13:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:57.198 13:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.198 13:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.461 13:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:57.461 "name": "Existed_Raid", 00:13:57.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.461 "strip_size_kb": 64, 
00:13:57.461 "state": "configuring", 00:13:57.461 "raid_level": "raid0", 00:13:57.461 "superblock": false, 00:13:57.461 "num_base_bdevs": 3, 00:13:57.461 "num_base_bdevs_discovered": 2, 00:13:57.461 "num_base_bdevs_operational": 3, 00:13:57.461 "base_bdevs_list": [ 00:13:57.461 { 00:13:57.461 "name": "BaseBdev1", 00:13:57.461 "uuid": "f5acabbb-ff8a-4574-84e8-874bbc439e4d", 00:13:57.461 "is_configured": true, 00:13:57.461 "data_offset": 0, 00:13:57.461 "data_size": 65536 00:13:57.461 }, 00:13:57.461 { 00:13:57.461 "name": null, 00:13:57.461 "uuid": "57a97d99-09ad-4d9c-9729-aaa4095c0459", 00:13:57.461 "is_configured": false, 00:13:57.461 "data_offset": 0, 00:13:57.461 "data_size": 65536 00:13:57.461 }, 00:13:57.461 { 00:13:57.461 "name": "BaseBdev3", 00:13:57.461 "uuid": "469a1c73-c626-472b-a2ab-cdf17442084c", 00:13:57.461 "is_configured": true, 00:13:57.461 "data_offset": 0, 00:13:57.461 "data_size": 65536 00:13:57.461 } 00:13:57.461 ] 00:13:57.461 }' 00:13:57.461 13:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:57.461 13:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.026 13:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:58.026 13:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.284 13:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:58.284 13:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:58.542 [2024-07-26 13:13:38.849063] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:58.542 13:13:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:58.542 13:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:58.542 13:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:58.542 13:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:58.542 13:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:58.542 13:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:58.542 13:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:58.542 13:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:58.542 13:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:58.542 13:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:58.542 13:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.542 13:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.799 13:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:58.799 "name": "Existed_Raid", 00:13:58.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.799 "strip_size_kb": 64, 00:13:58.799 "state": "configuring", 00:13:58.799 "raid_level": "raid0", 00:13:58.799 "superblock": false, 00:13:58.799 "num_base_bdevs": 3, 00:13:58.799 "num_base_bdevs_discovered": 1, 00:13:58.799 "num_base_bdevs_operational": 3, 00:13:58.799 "base_bdevs_list": [ 00:13:58.799 { 00:13:58.799 "name": null, 00:13:58.799 "uuid": 
"f5acabbb-ff8a-4574-84e8-874bbc439e4d", 00:13:58.799 "is_configured": false, 00:13:58.799 "data_offset": 0, 00:13:58.799 "data_size": 65536 00:13:58.799 }, 00:13:58.799 { 00:13:58.799 "name": null, 00:13:58.799 "uuid": "57a97d99-09ad-4d9c-9729-aaa4095c0459", 00:13:58.799 "is_configured": false, 00:13:58.799 "data_offset": 0, 00:13:58.799 "data_size": 65536 00:13:58.799 }, 00:13:58.799 { 00:13:58.799 "name": "BaseBdev3", 00:13:58.799 "uuid": "469a1c73-c626-472b-a2ab-cdf17442084c", 00:13:58.799 "is_configured": true, 00:13:58.799 "data_offset": 0, 00:13:58.799 "data_size": 65536 00:13:58.799 } 00:13:58.799 ] 00:13:58.799 }' 00:13:58.799 13:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:58.799 13:13:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.366 13:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:59.366 13:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:59.366 13:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:59.366 13:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:59.625 [2024-07-26 13:13:40.002067] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:59.625 13:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:59.625 13:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:59.625 13:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:13:59.625 13:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:59.625 13:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:59.625 13:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:59.625 13:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:59.625 13:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:59.625 13:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:59.625 13:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:59.625 13:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:59.625 13:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.883 13:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:59.883 "name": "Existed_Raid", 00:13:59.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.883 "strip_size_kb": 64, 00:13:59.883 "state": "configuring", 00:13:59.883 "raid_level": "raid0", 00:13:59.883 "superblock": false, 00:13:59.883 "num_base_bdevs": 3, 00:13:59.883 "num_base_bdevs_discovered": 2, 00:13:59.883 "num_base_bdevs_operational": 3, 00:13:59.883 "base_bdevs_list": [ 00:13:59.883 { 00:13:59.883 "name": null, 00:13:59.883 "uuid": "f5acabbb-ff8a-4574-84e8-874bbc439e4d", 00:13:59.883 "is_configured": false, 00:13:59.883 "data_offset": 0, 00:13:59.883 "data_size": 65536 00:13:59.883 }, 00:13:59.883 { 00:13:59.883 "name": "BaseBdev2", 00:13:59.883 "uuid": "57a97d99-09ad-4d9c-9729-aaa4095c0459", 00:13:59.883 "is_configured": true, 
00:13:59.883 "data_offset": 0, 00:13:59.883 "data_size": 65536 00:13:59.883 }, 00:13:59.883 { 00:13:59.883 "name": "BaseBdev3", 00:13:59.883 "uuid": "469a1c73-c626-472b-a2ab-cdf17442084c", 00:13:59.883 "is_configured": true, 00:13:59.883 "data_offset": 0, 00:13:59.883 "data_size": 65536 00:13:59.883 } 00:13:59.883 ] 00:13:59.883 }' 00:13:59.883 13:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:59.883 13:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.450 13:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.450 13:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:00.711 13:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:00.711 13:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.711 13:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:01.014 13:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u f5acabbb-ff8a-4574-84e8-874bbc439e4d 00:14:01.014 [2024-07-26 13:13:41.404918] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:01.014 [2024-07-26 13:13:41.404949] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x10976f0 00:14:01.014 [2024-07-26 13:13:41.404957] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:01.014 [2024-07-26 13:13:41.405125] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1098430 00:14:01.014 [2024-07-26 13:13:41.405242] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10976f0 00:14:01.014 [2024-07-26 13:13:41.405251] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x10976f0 00:14:01.014 [2024-07-26 13:13:41.405398] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.014 NewBaseBdev 00:14:01.014 13:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:01.014 13:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:01.014 13:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:01.014 13:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:01.014 13:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:01.014 13:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:01.014 13:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:01.287 13:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:01.545 [ 00:14:01.545 { 00:14:01.545 "name": "NewBaseBdev", 00:14:01.545 "aliases": [ 00:14:01.545 "f5acabbb-ff8a-4574-84e8-874bbc439e4d" 00:14:01.545 ], 00:14:01.545 "product_name": "Malloc disk", 00:14:01.545 "block_size": 512, 00:14:01.545 "num_blocks": 65536, 00:14:01.545 "uuid": "f5acabbb-ff8a-4574-84e8-874bbc439e4d", 00:14:01.545 "assigned_rate_limits": { 00:14:01.545 "rw_ios_per_sec": 0, 
00:14:01.545 "rw_mbytes_per_sec": 0, 00:14:01.545 "r_mbytes_per_sec": 0, 00:14:01.545 "w_mbytes_per_sec": 0 00:14:01.545 }, 00:14:01.545 "claimed": true, 00:14:01.545 "claim_type": "exclusive_write", 00:14:01.545 "zoned": false, 00:14:01.545 "supported_io_types": { 00:14:01.545 "read": true, 00:14:01.545 "write": true, 00:14:01.545 "unmap": true, 00:14:01.545 "flush": true, 00:14:01.545 "reset": true, 00:14:01.545 "nvme_admin": false, 00:14:01.545 "nvme_io": false, 00:14:01.545 "nvme_io_md": false, 00:14:01.545 "write_zeroes": true, 00:14:01.545 "zcopy": true, 00:14:01.545 "get_zone_info": false, 00:14:01.545 "zone_management": false, 00:14:01.545 "zone_append": false, 00:14:01.545 "compare": false, 00:14:01.545 "compare_and_write": false, 00:14:01.545 "abort": true, 00:14:01.545 "seek_hole": false, 00:14:01.545 "seek_data": false, 00:14:01.545 "copy": true, 00:14:01.545 "nvme_iov_md": false 00:14:01.545 }, 00:14:01.545 "memory_domains": [ 00:14:01.545 { 00:14:01.545 "dma_device_id": "system", 00:14:01.545 "dma_device_type": 1 00:14:01.545 }, 00:14:01.545 { 00:14:01.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.545 "dma_device_type": 2 00:14:01.545 } 00:14:01.545 ], 00:14:01.545 "driver_specific": {} 00:14:01.545 } 00:14:01.545 ] 00:14:01.545 13:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:01.545 13:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:01.545 13:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:01.545 13:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:01.545 13:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:01.545 13:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:01.545 13:13:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:01.545 13:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:01.546 13:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:01.546 13:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:01.546 13:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:01.546 13:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.546 13:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.804 13:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:01.804 "name": "Existed_Raid", 00:14:01.804 "uuid": "6b83bdc4-12ce-4b8c-8f69-3fd515315f5e", 00:14:01.804 "strip_size_kb": 64, 00:14:01.804 "state": "online", 00:14:01.804 "raid_level": "raid0", 00:14:01.804 "superblock": false, 00:14:01.804 "num_base_bdevs": 3, 00:14:01.804 "num_base_bdevs_discovered": 3, 00:14:01.804 "num_base_bdevs_operational": 3, 00:14:01.804 "base_bdevs_list": [ 00:14:01.804 { 00:14:01.804 "name": "NewBaseBdev", 00:14:01.804 "uuid": "f5acabbb-ff8a-4574-84e8-874bbc439e4d", 00:14:01.804 "is_configured": true, 00:14:01.804 "data_offset": 0, 00:14:01.804 "data_size": 65536 00:14:01.804 }, 00:14:01.804 { 00:14:01.804 "name": "BaseBdev2", 00:14:01.804 "uuid": "57a97d99-09ad-4d9c-9729-aaa4095c0459", 00:14:01.804 "is_configured": true, 00:14:01.804 "data_offset": 0, 00:14:01.804 "data_size": 65536 00:14:01.804 }, 00:14:01.804 { 00:14:01.804 "name": "BaseBdev3", 00:14:01.804 "uuid": "469a1c73-c626-472b-a2ab-cdf17442084c", 00:14:01.804 "is_configured": true, 00:14:01.804 "data_offset": 0, 
00:14:01.804 "data_size": 65536 00:14:01.804 } 00:14:01.804 ] 00:14:01.804 }' 00:14:01.804 13:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:01.804 13:13:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.371 13:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:02.371 13:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:02.371 13:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:02.371 13:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:02.371 13:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:02.371 13:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:02.371 13:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:02.371 13:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:02.629 [2024-07-26 13:13:42.905161] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:02.629 13:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:02.629 "name": "Existed_Raid", 00:14:02.629 "aliases": [ 00:14:02.629 "6b83bdc4-12ce-4b8c-8f69-3fd515315f5e" 00:14:02.629 ], 00:14:02.629 "product_name": "Raid Volume", 00:14:02.629 "block_size": 512, 00:14:02.629 "num_blocks": 196608, 00:14:02.629 "uuid": "6b83bdc4-12ce-4b8c-8f69-3fd515315f5e", 00:14:02.629 "assigned_rate_limits": { 00:14:02.629 "rw_ios_per_sec": 0, 00:14:02.629 "rw_mbytes_per_sec": 0, 00:14:02.629 "r_mbytes_per_sec": 0, 00:14:02.629 "w_mbytes_per_sec": 0 00:14:02.629 }, 00:14:02.629 
"claimed": false, 00:14:02.629 "zoned": false, 00:14:02.629 "supported_io_types": { 00:14:02.629 "read": true, 00:14:02.629 "write": true, 00:14:02.629 "unmap": true, 00:14:02.629 "flush": true, 00:14:02.629 "reset": true, 00:14:02.629 "nvme_admin": false, 00:14:02.629 "nvme_io": false, 00:14:02.629 "nvme_io_md": false, 00:14:02.629 "write_zeroes": true, 00:14:02.629 "zcopy": false, 00:14:02.629 "get_zone_info": false, 00:14:02.629 "zone_management": false, 00:14:02.629 "zone_append": false, 00:14:02.629 "compare": false, 00:14:02.629 "compare_and_write": false, 00:14:02.629 "abort": false, 00:14:02.629 "seek_hole": false, 00:14:02.629 "seek_data": false, 00:14:02.629 "copy": false, 00:14:02.629 "nvme_iov_md": false 00:14:02.629 }, 00:14:02.629 "memory_domains": [ 00:14:02.629 { 00:14:02.629 "dma_device_id": "system", 00:14:02.629 "dma_device_type": 1 00:14:02.629 }, 00:14:02.629 { 00:14:02.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.629 "dma_device_type": 2 00:14:02.629 }, 00:14:02.629 { 00:14:02.629 "dma_device_id": "system", 00:14:02.629 "dma_device_type": 1 00:14:02.629 }, 00:14:02.629 { 00:14:02.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.629 "dma_device_type": 2 00:14:02.629 }, 00:14:02.629 { 00:14:02.629 "dma_device_id": "system", 00:14:02.629 "dma_device_type": 1 00:14:02.629 }, 00:14:02.629 { 00:14:02.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.629 "dma_device_type": 2 00:14:02.629 } 00:14:02.629 ], 00:14:02.629 "driver_specific": { 00:14:02.629 "raid": { 00:14:02.629 "uuid": "6b83bdc4-12ce-4b8c-8f69-3fd515315f5e", 00:14:02.629 "strip_size_kb": 64, 00:14:02.629 "state": "online", 00:14:02.629 "raid_level": "raid0", 00:14:02.629 "superblock": false, 00:14:02.629 "num_base_bdevs": 3, 00:14:02.629 "num_base_bdevs_discovered": 3, 00:14:02.629 "num_base_bdevs_operational": 3, 00:14:02.629 "base_bdevs_list": [ 00:14:02.629 { 00:14:02.629 "name": "NewBaseBdev", 00:14:02.629 "uuid": "f5acabbb-ff8a-4574-84e8-874bbc439e4d", 
00:14:02.629 "is_configured": true, 00:14:02.629 "data_offset": 0, 00:14:02.629 "data_size": 65536 00:14:02.629 }, 00:14:02.629 { 00:14:02.630 "name": "BaseBdev2", 00:14:02.630 "uuid": "57a97d99-09ad-4d9c-9729-aaa4095c0459", 00:14:02.630 "is_configured": true, 00:14:02.630 "data_offset": 0, 00:14:02.630 "data_size": 65536 00:14:02.630 }, 00:14:02.630 { 00:14:02.630 "name": "BaseBdev3", 00:14:02.630 "uuid": "469a1c73-c626-472b-a2ab-cdf17442084c", 00:14:02.630 "is_configured": true, 00:14:02.630 "data_offset": 0, 00:14:02.630 "data_size": 65536 00:14:02.630 } 00:14:02.630 ] 00:14:02.630 } 00:14:02.630 } 00:14:02.630 }' 00:14:02.630 13:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:02.630 13:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:02.630 BaseBdev2 00:14:02.630 BaseBdev3' 00:14:02.630 13:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:02.630 13:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:02.630 13:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:02.905 13:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:02.905 "name": "NewBaseBdev", 00:14:02.905 "aliases": [ 00:14:02.905 "f5acabbb-ff8a-4574-84e8-874bbc439e4d" 00:14:02.905 ], 00:14:02.905 "product_name": "Malloc disk", 00:14:02.905 "block_size": 512, 00:14:02.905 "num_blocks": 65536, 00:14:02.905 "uuid": "f5acabbb-ff8a-4574-84e8-874bbc439e4d", 00:14:02.905 "assigned_rate_limits": { 00:14:02.905 "rw_ios_per_sec": 0, 00:14:02.905 "rw_mbytes_per_sec": 0, 00:14:02.905 "r_mbytes_per_sec": 0, 00:14:02.905 "w_mbytes_per_sec": 0 00:14:02.905 }, 00:14:02.905 
"claimed": true, 00:14:02.905 "claim_type": "exclusive_write", 00:14:02.905 "zoned": false, 00:14:02.905 "supported_io_types": { 00:14:02.905 "read": true, 00:14:02.905 "write": true, 00:14:02.905 "unmap": true, 00:14:02.905 "flush": true, 00:14:02.905 "reset": true, 00:14:02.905 "nvme_admin": false, 00:14:02.905 "nvme_io": false, 00:14:02.905 "nvme_io_md": false, 00:14:02.905 "write_zeroes": true, 00:14:02.905 "zcopy": true, 00:14:02.905 "get_zone_info": false, 00:14:02.905 "zone_management": false, 00:14:02.905 "zone_append": false, 00:14:02.905 "compare": false, 00:14:02.905 "compare_and_write": false, 00:14:02.905 "abort": true, 00:14:02.905 "seek_hole": false, 00:14:02.905 "seek_data": false, 00:14:02.905 "copy": true, 00:14:02.905 "nvme_iov_md": false 00:14:02.905 }, 00:14:02.905 "memory_domains": [ 00:14:02.905 { 00:14:02.905 "dma_device_id": "system", 00:14:02.905 "dma_device_type": 1 00:14:02.905 }, 00:14:02.905 { 00:14:02.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.905 "dma_device_type": 2 00:14:02.905 } 00:14:02.905 ], 00:14:02.905 "driver_specific": {} 00:14:02.905 }' 00:14:02.905 13:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:02.905 13:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:02.905 13:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:02.905 13:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:02.905 13:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:02.905 13:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:02.905 13:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:02.905 13:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:03.172 13:13:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:03.172 13:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:03.172 13:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:03.172 13:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:03.172 13:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:03.172 13:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:03.172 13:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:03.430 13:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:03.430 "name": "BaseBdev2", 00:14:03.430 "aliases": [ 00:14:03.430 "57a97d99-09ad-4d9c-9729-aaa4095c0459" 00:14:03.430 ], 00:14:03.430 "product_name": "Malloc disk", 00:14:03.430 "block_size": 512, 00:14:03.430 "num_blocks": 65536, 00:14:03.430 "uuid": "57a97d99-09ad-4d9c-9729-aaa4095c0459", 00:14:03.430 "assigned_rate_limits": { 00:14:03.430 "rw_ios_per_sec": 0, 00:14:03.430 "rw_mbytes_per_sec": 0, 00:14:03.430 "r_mbytes_per_sec": 0, 00:14:03.430 "w_mbytes_per_sec": 0 00:14:03.430 }, 00:14:03.430 "claimed": true, 00:14:03.430 "claim_type": "exclusive_write", 00:14:03.430 "zoned": false, 00:14:03.430 "supported_io_types": { 00:14:03.430 "read": true, 00:14:03.430 "write": true, 00:14:03.430 "unmap": true, 00:14:03.430 "flush": true, 00:14:03.430 "reset": true, 00:14:03.430 "nvme_admin": false, 00:14:03.430 "nvme_io": false, 00:14:03.430 "nvme_io_md": false, 00:14:03.430 "write_zeroes": true, 00:14:03.430 "zcopy": true, 00:14:03.430 "get_zone_info": false, 00:14:03.430 "zone_management": false, 00:14:03.430 "zone_append": false, 00:14:03.430 "compare": false, 00:14:03.430 "compare_and_write": false, 
00:14:03.430 "abort": true, 00:14:03.430 "seek_hole": false, 00:14:03.430 "seek_data": false, 00:14:03.430 "copy": true, 00:14:03.430 "nvme_iov_md": false 00:14:03.430 }, 00:14:03.430 "memory_domains": [ 00:14:03.430 { 00:14:03.430 "dma_device_id": "system", 00:14:03.430 "dma_device_type": 1 00:14:03.430 }, 00:14:03.430 { 00:14:03.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.430 "dma_device_type": 2 00:14:03.430 } 00:14:03.430 ], 00:14:03.430 "driver_specific": {} 00:14:03.430 }' 00:14:03.430 13:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:03.430 13:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:03.430 13:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:03.430 13:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:03.430 13:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:03.430 13:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:03.430 13:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:03.689 13:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:03.689 13:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:03.689 13:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:03.689 13:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:03.689 13:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:03.689 13:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:03.689 13:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:03.689 13:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:03.947 13:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:03.947 "name": "BaseBdev3", 00:14:03.947 "aliases": [ 00:14:03.947 "469a1c73-c626-472b-a2ab-cdf17442084c" 00:14:03.947 ], 00:14:03.947 "product_name": "Malloc disk", 00:14:03.947 "block_size": 512, 00:14:03.947 "num_blocks": 65536, 00:14:03.947 "uuid": "469a1c73-c626-472b-a2ab-cdf17442084c", 00:14:03.947 "assigned_rate_limits": { 00:14:03.947 "rw_ios_per_sec": 0, 00:14:03.947 "rw_mbytes_per_sec": 0, 00:14:03.947 "r_mbytes_per_sec": 0, 00:14:03.947 "w_mbytes_per_sec": 0 00:14:03.947 }, 00:14:03.947 "claimed": true, 00:14:03.947 "claim_type": "exclusive_write", 00:14:03.947 "zoned": false, 00:14:03.947 "supported_io_types": { 00:14:03.947 "read": true, 00:14:03.947 "write": true, 00:14:03.947 "unmap": true, 00:14:03.947 "flush": true, 00:14:03.947 "reset": true, 00:14:03.947 "nvme_admin": false, 00:14:03.947 "nvme_io": false, 00:14:03.947 "nvme_io_md": false, 00:14:03.947 "write_zeroes": true, 00:14:03.947 "zcopy": true, 00:14:03.947 "get_zone_info": false, 00:14:03.947 "zone_management": false, 00:14:03.947 "zone_append": false, 00:14:03.947 "compare": false, 00:14:03.947 "compare_and_write": false, 00:14:03.947 "abort": true, 00:14:03.947 "seek_hole": false, 00:14:03.947 "seek_data": false, 00:14:03.947 "copy": true, 00:14:03.947 "nvme_iov_md": false 00:14:03.947 }, 00:14:03.947 "memory_domains": [ 00:14:03.947 { 00:14:03.947 "dma_device_id": "system", 00:14:03.947 "dma_device_type": 1 00:14:03.947 }, 00:14:03.947 { 00:14:03.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.947 "dma_device_type": 2 00:14:03.947 } 00:14:03.947 ], 00:14:03.947 "driver_specific": {} 00:14:03.947 }' 00:14:03.947 13:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:03.947 13:13:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:03.947 13:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:03.947 13:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:03.947 13:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:04.205 13:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:04.205 13:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:04.205 13:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:04.205 13:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:04.205 13:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:04.205 13:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:04.205 13:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:04.205 13:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:04.464 [2024-07-26 13:13:44.854043] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:04.464 [2024-07-26 13:13:44.854065] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:04.464 [2024-07-26 13:13:44.854113] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:04.464 [2024-07-26 13:13:44.854165] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:04.464 [2024-07-26 13:13:44.854176] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10976f0 name Existed_Raid, state offline 
00:14:04.464 13:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 678938 00:14:04.464 13:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 678938 ']' 00:14:04.464 13:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 678938 00:14:04.464 13:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:14:04.464 13:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:04.464 13:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 678938 00:14:04.464 13:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:04.464 13:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:04.464 13:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 678938' 00:14:04.464 killing process with pid 678938 00:14:04.464 13:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 678938 00:14:04.464 [2024-07-26 13:13:44.927980] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:04.464 13:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 678938 00:14:04.464 [2024-07-26 13:13:44.951397] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:14:04.723 00:14:04.723 real 0m26.450s 00:14:04.723 user 0m48.530s 00:14:04.723 sys 0m4.811s 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.723 ************************************ 00:14:04.723 END TEST 
raid_state_function_test 00:14:04.723 ************************************ 00:14:04.723 13:13:45 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:14:04.723 13:13:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:04.723 13:13:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:04.723 13:13:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:04.723 ************************************ 00:14:04.723 START TEST raid_state_function_test_sb 00:14:04.723 ************************************ 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=684034 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 684034' 00:14:04.723 Process raid pid: 684034 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 684034 /var/tmp/spdk-raid.sock 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 684034 ']' 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:04.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:04.723 13:13:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.982 [2024-07-26 13:13:45.297931] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:14:04.982 [2024-07-26 13:13:45.297989] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.982 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:04.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.982 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:04.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.982 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:04.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.982 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:04.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.982 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:04.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.982 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:04.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.982 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:04.982 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.982 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:04.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.983 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:04.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.983 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:04.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.983 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:04.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.983 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:04.983 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.983 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:04.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.983 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:04.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.983 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:04.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.983 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:04.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.983 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:04.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.983 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:04.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.983 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:04.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.983 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:04.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.983 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:04.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.983 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:04.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.983 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:04.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.983 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:04.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.983 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:04.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.983 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:04.983 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.983 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:04.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.983 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:04.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.983 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:04.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.983 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:04.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.983 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:04.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.983 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:04.983 [2024-07-26 13:13:45.430798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.241 [2024-07-26 13:13:45.518123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.241 [2024-07-26 13:13:45.576409] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:05.241 [2024-07-26 13:13:45.576444] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:05.809 13:13:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:05.809 13:13:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:05.809 13:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:06.069 [2024-07-26 13:13:46.408077] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:06.069 [2024-07-26 13:13:46.408114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:14:06.069 [2024-07-26 13:13:46.408125] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:06.069 [2024-07-26 13:13:46.408135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:06.069 [2024-07-26 13:13:46.408149] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:06.069 [2024-07-26 13:13:46.408159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:06.069 13:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:06.069 13:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:06.069 13:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:06.069 13:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:06.069 13:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:06.069 13:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:06.069 13:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:06.069 13:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:06.069 13:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:06.069 13:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:06.069 13:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.069 13:13:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.328 13:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:06.328 "name": "Existed_Raid", 00:14:06.328 "uuid": "e696fd79-1d9c-479d-83bf-95336cb28c14", 00:14:06.328 "strip_size_kb": 64, 00:14:06.328 "state": "configuring", 00:14:06.328 "raid_level": "raid0", 00:14:06.328 "superblock": true, 00:14:06.328 "num_base_bdevs": 3, 00:14:06.328 "num_base_bdevs_discovered": 0, 00:14:06.328 "num_base_bdevs_operational": 3, 00:14:06.328 "base_bdevs_list": [ 00:14:06.328 { 00:14:06.328 "name": "BaseBdev1", 00:14:06.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.328 "is_configured": false, 00:14:06.328 "data_offset": 0, 00:14:06.328 "data_size": 0 00:14:06.328 }, 00:14:06.328 { 00:14:06.328 "name": "BaseBdev2", 00:14:06.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.328 "is_configured": false, 00:14:06.328 "data_offset": 0, 00:14:06.328 "data_size": 0 00:14:06.328 }, 00:14:06.328 { 00:14:06.329 "name": "BaseBdev3", 00:14:06.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.329 "is_configured": false, 00:14:06.329 "data_offset": 0, 00:14:06.329 "data_size": 0 00:14:06.329 } 00:14:06.329 ] 00:14:06.329 }' 00:14:06.329 13:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:06.329 13:13:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.896 13:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:06.896 [2024-07-26 13:13:47.394738] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:06.896 [2024-07-26 13:13:47.394768] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf27f40 name Existed_Raid, state 
configuring 00:14:06.896 13:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:07.156 [2024-07-26 13:13:47.619347] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:07.156 [2024-07-26 13:13:47.619373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:07.156 [2024-07-26 13:13:47.619382] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:07.156 [2024-07-26 13:13:47.619392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:07.156 [2024-07-26 13:13:47.619400] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:07.156 [2024-07-26 13:13:47.619410] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:07.156 13:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:07.415 [2024-07-26 13:13:47.857298] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:07.415 BaseBdev1 00:14:07.415 13:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:07.415 13:13:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:07.415 13:13:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:07.415 13:13:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:07.415 13:13:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' 
]] 00:14:07.415 13:13:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:07.415 13:13:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:07.673 13:13:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:07.931 [ 00:14:07.931 { 00:14:07.931 "name": "BaseBdev1", 00:14:07.931 "aliases": [ 00:14:07.931 "17e363a6-272f-462d-aa08-582fb98da07a" 00:14:07.931 ], 00:14:07.931 "product_name": "Malloc disk", 00:14:07.931 "block_size": 512, 00:14:07.931 "num_blocks": 65536, 00:14:07.931 "uuid": "17e363a6-272f-462d-aa08-582fb98da07a", 00:14:07.931 "assigned_rate_limits": { 00:14:07.931 "rw_ios_per_sec": 0, 00:14:07.931 "rw_mbytes_per_sec": 0, 00:14:07.931 "r_mbytes_per_sec": 0, 00:14:07.931 "w_mbytes_per_sec": 0 00:14:07.931 }, 00:14:07.931 "claimed": true, 00:14:07.931 "claim_type": "exclusive_write", 00:14:07.931 "zoned": false, 00:14:07.931 "supported_io_types": { 00:14:07.931 "read": true, 00:14:07.931 "write": true, 00:14:07.931 "unmap": true, 00:14:07.931 "flush": true, 00:14:07.931 "reset": true, 00:14:07.931 "nvme_admin": false, 00:14:07.931 "nvme_io": false, 00:14:07.931 "nvme_io_md": false, 00:14:07.931 "write_zeroes": true, 00:14:07.931 "zcopy": true, 00:14:07.931 "get_zone_info": false, 00:14:07.931 "zone_management": false, 00:14:07.931 "zone_append": false, 00:14:07.931 "compare": false, 00:14:07.931 "compare_and_write": false, 00:14:07.931 "abort": true, 00:14:07.931 "seek_hole": false, 00:14:07.931 "seek_data": false, 00:14:07.931 "copy": true, 00:14:07.931 "nvme_iov_md": false 00:14:07.931 }, 00:14:07.931 "memory_domains": [ 00:14:07.931 { 00:14:07.931 "dma_device_id": "system", 00:14:07.931 "dma_device_type": 1 
00:14:07.931 }, 00:14:07.931 { 00:14:07.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.931 "dma_device_type": 2 00:14:07.931 } 00:14:07.931 ], 00:14:07.931 "driver_specific": {} 00:14:07.931 } 00:14:07.931 ] 00:14:07.931 13:13:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:07.931 13:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:07.931 13:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:07.931 13:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:07.931 13:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:07.931 13:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:07.931 13:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:07.931 13:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:07.931 13:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:07.931 13:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:07.931 13:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:07.931 13:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.931 13:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.190 13:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:08.190 "name": "Existed_Raid", 
00:14:08.190 "uuid": "57574b3c-6990-4613-9b0d-7c6715a7a57d", 00:14:08.190 "strip_size_kb": 64, 00:14:08.190 "state": "configuring", 00:14:08.190 "raid_level": "raid0", 00:14:08.190 "superblock": true, 00:14:08.190 "num_base_bdevs": 3, 00:14:08.190 "num_base_bdevs_discovered": 1, 00:14:08.190 "num_base_bdevs_operational": 3, 00:14:08.190 "base_bdevs_list": [ 00:14:08.190 { 00:14:08.190 "name": "BaseBdev1", 00:14:08.190 "uuid": "17e363a6-272f-462d-aa08-582fb98da07a", 00:14:08.190 "is_configured": true, 00:14:08.190 "data_offset": 2048, 00:14:08.190 "data_size": 63488 00:14:08.190 }, 00:14:08.190 { 00:14:08.190 "name": "BaseBdev2", 00:14:08.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.190 "is_configured": false, 00:14:08.190 "data_offset": 0, 00:14:08.190 "data_size": 0 00:14:08.190 }, 00:14:08.190 { 00:14:08.190 "name": "BaseBdev3", 00:14:08.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.190 "is_configured": false, 00:14:08.190 "data_offset": 0, 00:14:08.190 "data_size": 0 00:14:08.190 } 00:14:08.190 ] 00:14:08.190 }' 00:14:08.190 13:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:08.190 13:13:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.756 13:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:09.014 [2024-07-26 13:13:49.337194] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:09.014 [2024-07-26 13:13:49.337229] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf27810 name Existed_Raid, state configuring 00:14:09.014 13:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' 
-n Existed_Raid 00:14:09.273 [2024-07-26 13:13:49.561821] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:09.273 [2024-07-26 13:13:49.563214] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:09.273 [2024-07-26 13:13:49.563246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:09.273 [2024-07-26 13:13:49.563255] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:09.273 [2024-07-26 13:13:49.563266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:09.273 13:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:09.273 13:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:09.273 13:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:09.273 13:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:09.273 13:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:09.273 13:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:09.273 13:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:09.273 13:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:09.273 13:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:09.273 13:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:09.273 13:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:09.273 13:13:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:09.273 13:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.273 13:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.531 13:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:09.531 "name": "Existed_Raid", 00:14:09.531 "uuid": "7d2313fd-6f94-427e-bfbd-2bec486ea9c1", 00:14:09.531 "strip_size_kb": 64, 00:14:09.531 "state": "configuring", 00:14:09.531 "raid_level": "raid0", 00:14:09.531 "superblock": true, 00:14:09.531 "num_base_bdevs": 3, 00:14:09.531 "num_base_bdevs_discovered": 1, 00:14:09.531 "num_base_bdevs_operational": 3, 00:14:09.531 "base_bdevs_list": [ 00:14:09.531 { 00:14:09.531 "name": "BaseBdev1", 00:14:09.531 "uuid": "17e363a6-272f-462d-aa08-582fb98da07a", 00:14:09.531 "is_configured": true, 00:14:09.531 "data_offset": 2048, 00:14:09.531 "data_size": 63488 00:14:09.531 }, 00:14:09.531 { 00:14:09.531 "name": "BaseBdev2", 00:14:09.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.531 "is_configured": false, 00:14:09.531 "data_offset": 0, 00:14:09.531 "data_size": 0 00:14:09.531 }, 00:14:09.531 { 00:14:09.531 "name": "BaseBdev3", 00:14:09.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.531 "is_configured": false, 00:14:09.531 "data_offset": 0, 00:14:09.531 "data_size": 0 00:14:09.531 } 00:14:09.531 ] 00:14:09.531 }' 00:14:09.531 13:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:09.531 13:13:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.098 13:13:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:10.098 [2024-07-26 13:13:50.591844] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.098 BaseBdev2 00:14:10.098 13:13:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:10.098 13:13:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:10.098 13:13:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:10.098 13:13:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:10.098 13:13:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:10.098 13:13:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:10.098 13:13:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:10.356 13:13:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:10.615 [ 00:14:10.615 { 00:14:10.615 "name": "BaseBdev2", 00:14:10.615 "aliases": [ 00:14:10.615 "4ff4d3c9-6aad-464f-93b7-5310350d69b3" 00:14:10.615 ], 00:14:10.615 "product_name": "Malloc disk", 00:14:10.615 "block_size": 512, 00:14:10.615 "num_blocks": 65536, 00:14:10.615 "uuid": "4ff4d3c9-6aad-464f-93b7-5310350d69b3", 00:14:10.615 "assigned_rate_limits": { 00:14:10.615 "rw_ios_per_sec": 0, 00:14:10.615 "rw_mbytes_per_sec": 0, 00:14:10.615 "r_mbytes_per_sec": 0, 00:14:10.615 "w_mbytes_per_sec": 0 00:14:10.615 }, 00:14:10.615 "claimed": true, 00:14:10.615 "claim_type": "exclusive_write", 00:14:10.615 "zoned": false, 00:14:10.615 "supported_io_types": { 
00:14:10.615 "read": true, 00:14:10.615 "write": true, 00:14:10.615 "unmap": true, 00:14:10.615 "flush": true, 00:14:10.615 "reset": true, 00:14:10.615 "nvme_admin": false, 00:14:10.615 "nvme_io": false, 00:14:10.615 "nvme_io_md": false, 00:14:10.615 "write_zeroes": true, 00:14:10.615 "zcopy": true, 00:14:10.615 "get_zone_info": false, 00:14:10.615 "zone_management": false, 00:14:10.615 "zone_append": false, 00:14:10.615 "compare": false, 00:14:10.615 "compare_and_write": false, 00:14:10.615 "abort": true, 00:14:10.615 "seek_hole": false, 00:14:10.615 "seek_data": false, 00:14:10.615 "copy": true, 00:14:10.615 "nvme_iov_md": false 00:14:10.615 }, 00:14:10.615 "memory_domains": [ 00:14:10.615 { 00:14:10.615 "dma_device_id": "system", 00:14:10.615 "dma_device_type": 1 00:14:10.615 }, 00:14:10.615 { 00:14:10.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.615 "dma_device_type": 2 00:14:10.615 } 00:14:10.615 ], 00:14:10.615 "driver_specific": {} 00:14:10.615 } 00:14:10.615 ] 00:14:10.615 13:13:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:10.615 13:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:10.615 13:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:10.615 13:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:10.615 13:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:10.615 13:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:10.615 13:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:10.615 13:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:10.615 13:13:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:10.615 13:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:10.615 13:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:10.615 13:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:10.615 13:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:10.616 13:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.616 13:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.874 13:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:10.874 "name": "Existed_Raid", 00:14:10.874 "uuid": "7d2313fd-6f94-427e-bfbd-2bec486ea9c1", 00:14:10.874 "strip_size_kb": 64, 00:14:10.874 "state": "configuring", 00:14:10.874 "raid_level": "raid0", 00:14:10.874 "superblock": true, 00:14:10.874 "num_base_bdevs": 3, 00:14:10.874 "num_base_bdevs_discovered": 2, 00:14:10.874 "num_base_bdevs_operational": 3, 00:14:10.874 "base_bdevs_list": [ 00:14:10.874 { 00:14:10.874 "name": "BaseBdev1", 00:14:10.874 "uuid": "17e363a6-272f-462d-aa08-582fb98da07a", 00:14:10.874 "is_configured": true, 00:14:10.874 "data_offset": 2048, 00:14:10.874 "data_size": 63488 00:14:10.874 }, 00:14:10.874 { 00:14:10.874 "name": "BaseBdev2", 00:14:10.874 "uuid": "4ff4d3c9-6aad-464f-93b7-5310350d69b3", 00:14:10.874 "is_configured": true, 00:14:10.874 "data_offset": 2048, 00:14:10.874 "data_size": 63488 00:14:10.874 }, 00:14:10.874 { 00:14:10.874 "name": "BaseBdev3", 00:14:10.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.874 "is_configured": false, 00:14:10.874 "data_offset": 0, 00:14:10.874 
"data_size": 0 00:14:10.874 } 00:14:10.874 ] 00:14:10.874 }' 00:14:10.874 13:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:10.874 13:13:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.441 13:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:11.699 [2024-07-26 13:13:52.066842] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:11.699 [2024-07-26 13:13:52.066983] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xf28710 00:14:11.699 [2024-07-26 13:13:52.066996] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:11.699 [2024-07-26 13:13:52.067165] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf283e0 00:14:11.699 [2024-07-26 13:13:52.067275] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf28710 00:14:11.699 [2024-07-26 13:13:52.067284] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xf28710 00:14:11.699 [2024-07-26 13:13:52.067370] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.700 BaseBdev3 00:14:11.700 13:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:11.700 13:13:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:11.700 13:13:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:11.700 13:13:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:11.700 13:13:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:11.700 13:13:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:11.700 13:13:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:11.958 13:13:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:12.216 [ 00:14:12.216 { 00:14:12.216 "name": "BaseBdev3", 00:14:12.216 "aliases": [ 00:14:12.216 "5ee71dbd-4c60-40cd-a465-8580894d2308" 00:14:12.216 ], 00:14:12.216 "product_name": "Malloc disk", 00:14:12.216 "block_size": 512, 00:14:12.216 "num_blocks": 65536, 00:14:12.216 "uuid": "5ee71dbd-4c60-40cd-a465-8580894d2308", 00:14:12.216 "assigned_rate_limits": { 00:14:12.216 "rw_ios_per_sec": 0, 00:14:12.216 "rw_mbytes_per_sec": 0, 00:14:12.216 "r_mbytes_per_sec": 0, 00:14:12.216 "w_mbytes_per_sec": 0 00:14:12.216 }, 00:14:12.216 "claimed": true, 00:14:12.216 "claim_type": "exclusive_write", 00:14:12.216 "zoned": false, 00:14:12.216 "supported_io_types": { 00:14:12.216 "read": true, 00:14:12.216 "write": true, 00:14:12.216 "unmap": true, 00:14:12.216 "flush": true, 00:14:12.216 "reset": true, 00:14:12.216 "nvme_admin": false, 00:14:12.216 "nvme_io": false, 00:14:12.216 "nvme_io_md": false, 00:14:12.216 "write_zeroes": true, 00:14:12.216 "zcopy": true, 00:14:12.216 "get_zone_info": false, 00:14:12.216 "zone_management": false, 00:14:12.216 "zone_append": false, 00:14:12.216 "compare": false, 00:14:12.216 "compare_and_write": false, 00:14:12.216 "abort": true, 00:14:12.216 "seek_hole": false, 00:14:12.216 "seek_data": false, 00:14:12.216 "copy": true, 00:14:12.216 "nvme_iov_md": false 00:14:12.216 }, 00:14:12.216 "memory_domains": [ 00:14:12.216 { 00:14:12.216 "dma_device_id": "system", 00:14:12.216 "dma_device_type": 1 00:14:12.216 }, 
00:14:12.216 { 00:14:12.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.216 "dma_device_type": 2 00:14:12.216 } 00:14:12.216 ], 00:14:12.216 "driver_specific": {} 00:14:12.216 } 00:14:12.216 ] 00:14:12.216 13:13:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:12.216 13:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:12.216 13:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:12.216 13:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:12.216 13:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:12.216 13:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:12.216 13:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:12.216 13:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:12.216 13:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:12.216 13:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:12.216 13:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:12.216 13:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:12.216 13:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:12.216 13:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.217 13:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:14:12.474 13:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:12.474 "name": "Existed_Raid", 00:14:12.475 "uuid": "7d2313fd-6f94-427e-bfbd-2bec486ea9c1", 00:14:12.475 "strip_size_kb": 64, 00:14:12.475 "state": "online", 00:14:12.475 "raid_level": "raid0", 00:14:12.475 "superblock": true, 00:14:12.475 "num_base_bdevs": 3, 00:14:12.475 "num_base_bdevs_discovered": 3, 00:14:12.475 "num_base_bdevs_operational": 3, 00:14:12.475 "base_bdevs_list": [ 00:14:12.475 { 00:14:12.475 "name": "BaseBdev1", 00:14:12.475 "uuid": "17e363a6-272f-462d-aa08-582fb98da07a", 00:14:12.475 "is_configured": true, 00:14:12.475 "data_offset": 2048, 00:14:12.475 "data_size": 63488 00:14:12.475 }, 00:14:12.475 { 00:14:12.475 "name": "BaseBdev2", 00:14:12.475 "uuid": "4ff4d3c9-6aad-464f-93b7-5310350d69b3", 00:14:12.475 "is_configured": true, 00:14:12.475 "data_offset": 2048, 00:14:12.475 "data_size": 63488 00:14:12.475 }, 00:14:12.475 { 00:14:12.475 "name": "BaseBdev3", 00:14:12.475 "uuid": "5ee71dbd-4c60-40cd-a465-8580894d2308", 00:14:12.475 "is_configured": true, 00:14:12.475 "data_offset": 2048, 00:14:12.475 "data_size": 63488 00:14:12.475 } 00:14:12.475 ] 00:14:12.475 }' 00:14:12.475 13:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:12.475 13:13:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.041 13:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:13.041 13:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:13.041 13:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:13.041 13:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:13.041 13:13:53 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:13.041 13:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:13.041 13:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:13.041 13:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:13.041 [2024-07-26 13:13:53.539007] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:13.041 13:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:13.041 "name": "Existed_Raid", 00:14:13.041 "aliases": [ 00:14:13.041 "7d2313fd-6f94-427e-bfbd-2bec486ea9c1" 00:14:13.041 ], 00:14:13.041 "product_name": "Raid Volume", 00:14:13.041 "block_size": 512, 00:14:13.041 "num_blocks": 190464, 00:14:13.041 "uuid": "7d2313fd-6f94-427e-bfbd-2bec486ea9c1", 00:14:13.041 "assigned_rate_limits": { 00:14:13.041 "rw_ios_per_sec": 0, 00:14:13.041 "rw_mbytes_per_sec": 0, 00:14:13.041 "r_mbytes_per_sec": 0, 00:14:13.041 "w_mbytes_per_sec": 0 00:14:13.041 }, 00:14:13.041 "claimed": false, 00:14:13.041 "zoned": false, 00:14:13.041 "supported_io_types": { 00:14:13.041 "read": true, 00:14:13.041 "write": true, 00:14:13.041 "unmap": true, 00:14:13.041 "flush": true, 00:14:13.041 "reset": true, 00:14:13.041 "nvme_admin": false, 00:14:13.041 "nvme_io": false, 00:14:13.041 "nvme_io_md": false, 00:14:13.041 "write_zeroes": true, 00:14:13.041 "zcopy": false, 00:14:13.041 "get_zone_info": false, 00:14:13.041 "zone_management": false, 00:14:13.041 "zone_append": false, 00:14:13.041 "compare": false, 00:14:13.041 "compare_and_write": false, 00:14:13.041 "abort": false, 00:14:13.041 "seek_hole": false, 00:14:13.041 "seek_data": false, 00:14:13.041 "copy": false, 00:14:13.041 "nvme_iov_md": false 00:14:13.041 }, 00:14:13.041 "memory_domains": [ 00:14:13.041 { 
00:14:13.041 "dma_device_id": "system", 00:14:13.041 "dma_device_type": 1 00:14:13.041 }, 00:14:13.041 { 00:14:13.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.041 "dma_device_type": 2 00:14:13.041 }, 00:14:13.041 { 00:14:13.041 "dma_device_id": "system", 00:14:13.041 "dma_device_type": 1 00:14:13.041 }, 00:14:13.041 { 00:14:13.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.041 "dma_device_type": 2 00:14:13.041 }, 00:14:13.041 { 00:14:13.041 "dma_device_id": "system", 00:14:13.041 "dma_device_type": 1 00:14:13.041 }, 00:14:13.041 { 00:14:13.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.041 "dma_device_type": 2 00:14:13.041 } 00:14:13.041 ], 00:14:13.041 "driver_specific": { 00:14:13.041 "raid": { 00:14:13.041 "uuid": "7d2313fd-6f94-427e-bfbd-2bec486ea9c1", 00:14:13.041 "strip_size_kb": 64, 00:14:13.041 "state": "online", 00:14:13.041 "raid_level": "raid0", 00:14:13.041 "superblock": true, 00:14:13.041 "num_base_bdevs": 3, 00:14:13.041 "num_base_bdevs_discovered": 3, 00:14:13.041 "num_base_bdevs_operational": 3, 00:14:13.041 "base_bdevs_list": [ 00:14:13.041 { 00:14:13.041 "name": "BaseBdev1", 00:14:13.041 "uuid": "17e363a6-272f-462d-aa08-582fb98da07a", 00:14:13.041 "is_configured": true, 00:14:13.041 "data_offset": 2048, 00:14:13.041 "data_size": 63488 00:14:13.041 }, 00:14:13.041 { 00:14:13.041 "name": "BaseBdev2", 00:14:13.041 "uuid": "4ff4d3c9-6aad-464f-93b7-5310350d69b3", 00:14:13.041 "is_configured": true, 00:14:13.041 "data_offset": 2048, 00:14:13.042 "data_size": 63488 00:14:13.042 }, 00:14:13.042 { 00:14:13.042 "name": "BaseBdev3", 00:14:13.042 "uuid": "5ee71dbd-4c60-40cd-a465-8580894d2308", 00:14:13.042 "is_configured": true, 00:14:13.042 "data_offset": 2048, 00:14:13.042 "data_size": 63488 00:14:13.042 } 00:14:13.042 ] 00:14:13.042 } 00:14:13.042 } 00:14:13.042 }' 00:14:13.042 13:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == 
true).name' 00:14:13.301 13:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:13.301 BaseBdev2 00:14:13.301 BaseBdev3' 00:14:13.301 13:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:13.301 13:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:13.301 13:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:13.559 13:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:13.559 "name": "BaseBdev1", 00:14:13.559 "aliases": [ 00:14:13.559 "17e363a6-272f-462d-aa08-582fb98da07a" 00:14:13.559 ], 00:14:13.559 "product_name": "Malloc disk", 00:14:13.559 "block_size": 512, 00:14:13.559 "num_blocks": 65536, 00:14:13.559 "uuid": "17e363a6-272f-462d-aa08-582fb98da07a", 00:14:13.559 "assigned_rate_limits": { 00:14:13.559 "rw_ios_per_sec": 0, 00:14:13.559 "rw_mbytes_per_sec": 0, 00:14:13.559 "r_mbytes_per_sec": 0, 00:14:13.559 "w_mbytes_per_sec": 0 00:14:13.559 }, 00:14:13.559 "claimed": true, 00:14:13.559 "claim_type": "exclusive_write", 00:14:13.559 "zoned": false, 00:14:13.559 "supported_io_types": { 00:14:13.559 "read": true, 00:14:13.559 "write": true, 00:14:13.559 "unmap": true, 00:14:13.559 "flush": true, 00:14:13.560 "reset": true, 00:14:13.560 "nvme_admin": false, 00:14:13.560 "nvme_io": false, 00:14:13.560 "nvme_io_md": false, 00:14:13.560 "write_zeroes": true, 00:14:13.560 "zcopy": true, 00:14:13.560 "get_zone_info": false, 00:14:13.560 "zone_management": false, 00:14:13.560 "zone_append": false, 00:14:13.560 "compare": false, 00:14:13.560 "compare_and_write": false, 00:14:13.560 "abort": true, 00:14:13.560 "seek_hole": false, 00:14:13.560 "seek_data": false, 00:14:13.560 "copy": true, 00:14:13.560 "nvme_iov_md": false 00:14:13.560 
}, 00:14:13.560 "memory_domains": [ 00:14:13.560 { 00:14:13.560 "dma_device_id": "system", 00:14:13.560 "dma_device_type": 1 00:14:13.560 }, 00:14:13.560 { 00:14:13.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.560 "dma_device_type": 2 00:14:13.560 } 00:14:13.560 ], 00:14:13.560 "driver_specific": {} 00:14:13.560 }' 00:14:13.560 13:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:13.560 13:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:13.560 13:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:13.560 13:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:13.560 13:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:13.560 13:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:13.560 13:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:13.560 13:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:13.560 13:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:13.560 13:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:13.842 13:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:13.842 13:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:13.842 13:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:13.842 13:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:13.842 13:13:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:14.109 13:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:14.109 "name": "BaseBdev2", 00:14:14.109 "aliases": [ 00:14:14.109 "4ff4d3c9-6aad-464f-93b7-5310350d69b3" 00:14:14.109 ], 00:14:14.109 "product_name": "Malloc disk", 00:14:14.109 "block_size": 512, 00:14:14.109 "num_blocks": 65536, 00:14:14.109 "uuid": "4ff4d3c9-6aad-464f-93b7-5310350d69b3", 00:14:14.109 "assigned_rate_limits": { 00:14:14.109 "rw_ios_per_sec": 0, 00:14:14.109 "rw_mbytes_per_sec": 0, 00:14:14.109 "r_mbytes_per_sec": 0, 00:14:14.109 "w_mbytes_per_sec": 0 00:14:14.109 }, 00:14:14.109 "claimed": true, 00:14:14.109 "claim_type": "exclusive_write", 00:14:14.109 "zoned": false, 00:14:14.109 "supported_io_types": { 00:14:14.109 "read": true, 00:14:14.109 "write": true, 00:14:14.109 "unmap": true, 00:14:14.109 "flush": true, 00:14:14.109 "reset": true, 00:14:14.109 "nvme_admin": false, 00:14:14.109 "nvme_io": false, 00:14:14.109 "nvme_io_md": false, 00:14:14.109 "write_zeroes": true, 00:14:14.109 "zcopy": true, 00:14:14.109 "get_zone_info": false, 00:14:14.109 "zone_management": false, 00:14:14.109 "zone_append": false, 00:14:14.109 "compare": false, 00:14:14.109 "compare_and_write": false, 00:14:14.109 "abort": true, 00:14:14.109 "seek_hole": false, 00:14:14.109 "seek_data": false, 00:14:14.109 "copy": true, 00:14:14.109 "nvme_iov_md": false 00:14:14.109 }, 00:14:14.109 "memory_domains": [ 00:14:14.109 { 00:14:14.109 "dma_device_id": "system", 00:14:14.109 "dma_device_type": 1 00:14:14.109 }, 00:14:14.109 { 00:14:14.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.109 "dma_device_type": 2 00:14:14.109 } 00:14:14.109 ], 00:14:14.109 "driver_specific": {} 00:14:14.109 }' 00:14:14.109 13:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:14.109 13:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:14.109 13:13:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:14.109 13:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:14.109 13:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:14.109 13:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:14.109 13:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:14.109 13:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:14.368 13:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:14.368 13:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:14.368 13:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:14.368 13:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:14.368 13:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:14.368 13:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:14.368 13:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:14.627 13:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:14.627 "name": "BaseBdev3", 00:14:14.627 "aliases": [ 00:14:14.627 "5ee71dbd-4c60-40cd-a465-8580894d2308" 00:14:14.627 ], 00:14:14.627 "product_name": "Malloc disk", 00:14:14.627 "block_size": 512, 00:14:14.627 "num_blocks": 65536, 00:14:14.627 "uuid": "5ee71dbd-4c60-40cd-a465-8580894d2308", 00:14:14.627 "assigned_rate_limits": { 00:14:14.627 "rw_ios_per_sec": 0, 00:14:14.627 "rw_mbytes_per_sec": 0, 00:14:14.627 
"r_mbytes_per_sec": 0, 00:14:14.627 "w_mbytes_per_sec": 0 00:14:14.627 }, 00:14:14.627 "claimed": true, 00:14:14.627 "claim_type": "exclusive_write", 00:14:14.627 "zoned": false, 00:14:14.627 "supported_io_types": { 00:14:14.627 "read": true, 00:14:14.627 "write": true, 00:14:14.627 "unmap": true, 00:14:14.627 "flush": true, 00:14:14.627 "reset": true, 00:14:14.628 "nvme_admin": false, 00:14:14.628 "nvme_io": false, 00:14:14.628 "nvme_io_md": false, 00:14:14.628 "write_zeroes": true, 00:14:14.628 "zcopy": true, 00:14:14.628 "get_zone_info": false, 00:14:14.628 "zone_management": false, 00:14:14.628 "zone_append": false, 00:14:14.628 "compare": false, 00:14:14.628 "compare_and_write": false, 00:14:14.628 "abort": true, 00:14:14.628 "seek_hole": false, 00:14:14.628 "seek_data": false, 00:14:14.628 "copy": true, 00:14:14.628 "nvme_iov_md": false 00:14:14.628 }, 00:14:14.628 "memory_domains": [ 00:14:14.628 { 00:14:14.628 "dma_device_id": "system", 00:14:14.628 "dma_device_type": 1 00:14:14.628 }, 00:14:14.628 { 00:14:14.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.628 "dma_device_type": 2 00:14:14.628 } 00:14:14.628 ], 00:14:14.628 "driver_specific": {} 00:14:14.628 }' 00:14:14.628 13:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:14.628 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:14.628 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:14.628 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:14.628 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:14.887 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:14.887 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:14.887 13:13:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:14.887 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:14.887 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:14.887 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:14.887 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:14.887 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:15.146 [2024-07-26 13:13:55.463870] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:15.146 [2024-07-26 13:13:55.463894] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:15.146 [2024-07-26 13:13:55.463931] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:15.146 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:15.146 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:14:15.146 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:15.146 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:14:15.146 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:15.146 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:14:15.146 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:15.146 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:15.146 13:13:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:15.146 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:15.146 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:15.146 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:15.146 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:15.146 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:15.146 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:15.146 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.146 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.405 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:15.405 "name": "Existed_Raid", 00:14:15.405 "uuid": "7d2313fd-6f94-427e-bfbd-2bec486ea9c1", 00:14:15.405 "strip_size_kb": 64, 00:14:15.405 "state": "offline", 00:14:15.405 "raid_level": "raid0", 00:14:15.405 "superblock": true, 00:14:15.405 "num_base_bdevs": 3, 00:14:15.405 "num_base_bdevs_discovered": 2, 00:14:15.405 "num_base_bdevs_operational": 2, 00:14:15.405 "base_bdevs_list": [ 00:14:15.405 { 00:14:15.405 "name": null, 00:14:15.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.405 "is_configured": false, 00:14:15.405 "data_offset": 2048, 00:14:15.405 "data_size": 63488 00:14:15.405 }, 00:14:15.405 { 00:14:15.405 "name": "BaseBdev2", 00:14:15.405 "uuid": "4ff4d3c9-6aad-464f-93b7-5310350d69b3", 00:14:15.405 "is_configured": true, 00:14:15.405 
"data_offset": 2048, 00:14:15.405 "data_size": 63488 00:14:15.405 }, 00:14:15.405 { 00:14:15.405 "name": "BaseBdev3", 00:14:15.405 "uuid": "5ee71dbd-4c60-40cd-a465-8580894d2308", 00:14:15.405 "is_configured": true, 00:14:15.405 "data_offset": 2048, 00:14:15.405 "data_size": 63488 00:14:15.405 } 00:14:15.405 ] 00:14:15.405 }' 00:14:15.405 13:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:15.405 13:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.973 13:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:15.973 13:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:15.973 13:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:15.973 13:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.973 13:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:15.973 13:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:15.973 13:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:16.232 [2024-07-26 13:13:56.672066] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:16.232 13:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:16.232 13:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:16.232 13:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.232 13:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:16.491 13:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:16.491 13:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:16.491 13:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:16.749 [2024-07-26 13:13:57.127436] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:16.749 [2024-07-26 13:13:57.127481] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf28710 name Existed_Raid, state offline 00:14:16.749 13:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:16.750 13:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:16.750 13:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.750 13:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:17.008 13:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:17.008 13:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:17.008 13:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:14:17.008 13:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:17.008 13:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:17.008 13:13:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:17.267 BaseBdev2 00:14:17.267 13:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:17.267 13:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:17.267 13:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:17.267 13:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:17.267 13:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:17.267 13:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:17.267 13:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:17.836 13:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:17.836 [ 00:14:17.836 { 00:14:17.836 "name": "BaseBdev2", 00:14:17.836 "aliases": [ 00:14:17.836 "810a58ff-353d-462a-ab57-efcaa25c12fe" 00:14:17.836 ], 00:14:17.836 "product_name": "Malloc disk", 00:14:17.836 "block_size": 512, 00:14:17.836 "num_blocks": 65536, 00:14:17.836 "uuid": "810a58ff-353d-462a-ab57-efcaa25c12fe", 00:14:17.836 "assigned_rate_limits": { 00:14:17.836 "rw_ios_per_sec": 0, 00:14:17.836 "rw_mbytes_per_sec": 0, 00:14:17.836 "r_mbytes_per_sec": 0, 00:14:17.836 "w_mbytes_per_sec": 0 00:14:17.836 }, 00:14:17.836 "claimed": false, 00:14:17.836 "zoned": false, 00:14:17.836 "supported_io_types": { 00:14:17.836 "read": true, 00:14:17.836 "write": true, 00:14:17.836 "unmap": 
true, 00:14:17.836 "flush": true, 00:14:17.836 "reset": true, 00:14:17.836 "nvme_admin": false, 00:14:17.836 "nvme_io": false, 00:14:17.836 "nvme_io_md": false, 00:14:17.836 "write_zeroes": true, 00:14:17.836 "zcopy": true, 00:14:17.836 "get_zone_info": false, 00:14:17.836 "zone_management": false, 00:14:17.836 "zone_append": false, 00:14:17.836 "compare": false, 00:14:17.836 "compare_and_write": false, 00:14:17.836 "abort": true, 00:14:17.836 "seek_hole": false, 00:14:17.836 "seek_data": false, 00:14:17.836 "copy": true, 00:14:17.836 "nvme_iov_md": false 00:14:17.836 }, 00:14:17.836 "memory_domains": [ 00:14:17.836 { 00:14:17.836 "dma_device_id": "system", 00:14:17.836 "dma_device_type": 1 00:14:17.836 }, 00:14:17.836 { 00:14:17.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.836 "dma_device_type": 2 00:14:17.836 } 00:14:17.836 ], 00:14:17.836 "driver_specific": {} 00:14:17.836 } 00:14:17.836 ] 00:14:17.836 13:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:17.836 13:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:17.836 13:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:17.836 13:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:18.405 BaseBdev3 00:14:18.405 13:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:18.405 13:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:18.405 13:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:18.405 13:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:18.405 13:13:58 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:18.405 13:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:18.405 13:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:18.664 13:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:19.233 [ 00:14:19.233 { 00:14:19.233 "name": "BaseBdev3", 00:14:19.233 "aliases": [ 00:14:19.233 "cfe92f83-91d7-435e-b5bb-c6b0e4f86edf" 00:14:19.233 ], 00:14:19.233 "product_name": "Malloc disk", 00:14:19.233 "block_size": 512, 00:14:19.233 "num_blocks": 65536, 00:14:19.233 "uuid": "cfe92f83-91d7-435e-b5bb-c6b0e4f86edf", 00:14:19.233 "assigned_rate_limits": { 00:14:19.233 "rw_ios_per_sec": 0, 00:14:19.233 "rw_mbytes_per_sec": 0, 00:14:19.233 "r_mbytes_per_sec": 0, 00:14:19.233 "w_mbytes_per_sec": 0 00:14:19.233 }, 00:14:19.233 "claimed": false, 00:14:19.233 "zoned": false, 00:14:19.233 "supported_io_types": { 00:14:19.233 "read": true, 00:14:19.233 "write": true, 00:14:19.233 "unmap": true, 00:14:19.233 "flush": true, 00:14:19.233 "reset": true, 00:14:19.233 "nvme_admin": false, 00:14:19.233 "nvme_io": false, 00:14:19.233 "nvme_io_md": false, 00:14:19.233 "write_zeroes": true, 00:14:19.233 "zcopy": true, 00:14:19.233 "get_zone_info": false, 00:14:19.233 "zone_management": false, 00:14:19.233 "zone_append": false, 00:14:19.233 "compare": false, 00:14:19.233 "compare_and_write": false, 00:14:19.233 "abort": true, 00:14:19.233 "seek_hole": false, 00:14:19.233 "seek_data": false, 00:14:19.233 "copy": true, 00:14:19.233 "nvme_iov_md": false 00:14:19.233 }, 00:14:19.233 "memory_domains": [ 00:14:19.233 { 00:14:19.233 "dma_device_id": "system", 00:14:19.233 "dma_device_type": 1 
00:14:19.233 }, 00:14:19.233 { 00:14:19.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.233 "dma_device_type": 2 00:14:19.233 } 00:14:19.233 ], 00:14:19.233 "driver_specific": {} 00:14:19.233 } 00:14:19.233 ] 00:14:19.233 13:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:19.233 13:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:19.233 13:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:19.233 13:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:14:19.492 [2024-07-26 13:13:59.796874] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:19.492 [2024-07-26 13:13:59.796909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:19.492 [2024-07-26 13:13:59.796927] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:19.492 [2024-07-26 13:13:59.798156] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:19.492 13:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:19.492 13:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:19.492 13:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:19.492 13:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:19.492 13:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:19.492 13:13:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:19.492 13:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:19.492 13:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:19.492 13:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:19.492 13:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:19.492 13:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.492 13:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.752 13:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:19.752 "name": "Existed_Raid", 00:14:19.752 "uuid": "90a31ed1-d7bb-4050-b84d-a2f39948c203", 00:14:19.752 "strip_size_kb": 64, 00:14:19.752 "state": "configuring", 00:14:19.752 "raid_level": "raid0", 00:14:19.752 "superblock": true, 00:14:19.752 "num_base_bdevs": 3, 00:14:19.752 "num_base_bdevs_discovered": 2, 00:14:19.752 "num_base_bdevs_operational": 3, 00:14:19.752 "base_bdevs_list": [ 00:14:19.752 { 00:14:19.752 "name": "BaseBdev1", 00:14:19.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.752 "is_configured": false, 00:14:19.752 "data_offset": 0, 00:14:19.752 "data_size": 0 00:14:19.752 }, 00:14:19.752 { 00:14:19.752 "name": "BaseBdev2", 00:14:19.752 "uuid": "810a58ff-353d-462a-ab57-efcaa25c12fe", 00:14:19.752 "is_configured": true, 00:14:19.752 "data_offset": 2048, 00:14:19.752 "data_size": 63488 00:14:19.752 }, 00:14:19.752 { 00:14:19.752 "name": "BaseBdev3", 00:14:19.752 "uuid": "cfe92f83-91d7-435e-b5bb-c6b0e4f86edf", 00:14:19.752 "is_configured": true, 00:14:19.752 "data_offset": 2048, 00:14:19.752 
"data_size": 63488 00:14:19.752 } 00:14:19.752 ] 00:14:19.752 }' 00:14:19.752 13:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:19.752 13:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.320 13:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:20.320 [2024-07-26 13:14:00.823563] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:20.320 13:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:20.320 13:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:20.320 13:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:20.320 13:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:20.320 13:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:20.320 13:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:20.320 13:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:20.579 13:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:20.579 13:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:20.579 13:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:20.579 13:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:14:20.579 13:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.579 13:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:20.579 "name": "Existed_Raid", 00:14:20.579 "uuid": "90a31ed1-d7bb-4050-b84d-a2f39948c203", 00:14:20.579 "strip_size_kb": 64, 00:14:20.579 "state": "configuring", 00:14:20.579 "raid_level": "raid0", 00:14:20.579 "superblock": true, 00:14:20.579 "num_base_bdevs": 3, 00:14:20.579 "num_base_bdevs_discovered": 1, 00:14:20.579 "num_base_bdevs_operational": 3, 00:14:20.579 "base_bdevs_list": [ 00:14:20.579 { 00:14:20.579 "name": "BaseBdev1", 00:14:20.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.579 "is_configured": false, 00:14:20.579 "data_offset": 0, 00:14:20.579 "data_size": 0 00:14:20.579 }, 00:14:20.579 { 00:14:20.579 "name": null, 00:14:20.579 "uuid": "810a58ff-353d-462a-ab57-efcaa25c12fe", 00:14:20.579 "is_configured": false, 00:14:20.579 "data_offset": 2048, 00:14:20.579 "data_size": 63488 00:14:20.579 }, 00:14:20.579 { 00:14:20.579 "name": "BaseBdev3", 00:14:20.579 "uuid": "cfe92f83-91d7-435e-b5bb-c6b0e4f86edf", 00:14:20.579 "is_configured": true, 00:14:20.579 "data_offset": 2048, 00:14:20.579 "data_size": 63488 00:14:20.579 } 00:14:20.579 ] 00:14:20.579 }' 00:14:20.579 13:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:20.579 13:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.147 13:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.147 13:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:21.406 13:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == 
\f\a\l\s\e ]] 00:14:21.406 13:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:21.666 [2024-07-26 13:14:02.049884] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:21.666 BaseBdev1 00:14:21.666 13:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:21.666 13:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:21.666 13:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:21.666 13:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:21.666 13:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:21.666 13:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:21.666 13:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:21.925 13:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:22.184 [ 00:14:22.184 { 00:14:22.184 "name": "BaseBdev1", 00:14:22.184 "aliases": [ 00:14:22.184 "e1b28569-7c16-43fd-800f-d74416a62fee" 00:14:22.184 ], 00:14:22.184 "product_name": "Malloc disk", 00:14:22.184 "block_size": 512, 00:14:22.184 "num_blocks": 65536, 00:14:22.184 "uuid": "e1b28569-7c16-43fd-800f-d74416a62fee", 00:14:22.184 "assigned_rate_limits": { 00:14:22.184 "rw_ios_per_sec": 0, 00:14:22.184 "rw_mbytes_per_sec": 0, 00:14:22.184 "r_mbytes_per_sec": 0, 00:14:22.185 
"w_mbytes_per_sec": 0 00:14:22.185 }, 00:14:22.185 "claimed": true, 00:14:22.185 "claim_type": "exclusive_write", 00:14:22.185 "zoned": false, 00:14:22.185 "supported_io_types": { 00:14:22.185 "read": true, 00:14:22.185 "write": true, 00:14:22.185 "unmap": true, 00:14:22.185 "flush": true, 00:14:22.185 "reset": true, 00:14:22.185 "nvme_admin": false, 00:14:22.185 "nvme_io": false, 00:14:22.185 "nvme_io_md": false, 00:14:22.185 "write_zeroes": true, 00:14:22.185 "zcopy": true, 00:14:22.185 "get_zone_info": false, 00:14:22.185 "zone_management": false, 00:14:22.185 "zone_append": false, 00:14:22.185 "compare": false, 00:14:22.185 "compare_and_write": false, 00:14:22.185 "abort": true, 00:14:22.185 "seek_hole": false, 00:14:22.185 "seek_data": false, 00:14:22.185 "copy": true, 00:14:22.185 "nvme_iov_md": false 00:14:22.185 }, 00:14:22.185 "memory_domains": [ 00:14:22.185 { 00:14:22.185 "dma_device_id": "system", 00:14:22.185 "dma_device_type": 1 00:14:22.185 }, 00:14:22.185 { 00:14:22.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.185 "dma_device_type": 2 00:14:22.185 } 00:14:22.185 ], 00:14:22.185 "driver_specific": {} 00:14:22.185 } 00:14:22.185 ] 00:14:22.185 13:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:22.185 13:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:22.185 13:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:22.185 13:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:22.185 13:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:22.185 13:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:22.185 13:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:14:22.185 13:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:22.185 13:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:22.185 13:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:22.185 13:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:22.185 13:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.185 13:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.444 13:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:22.444 "name": "Existed_Raid", 00:14:22.444 "uuid": "90a31ed1-d7bb-4050-b84d-a2f39948c203", 00:14:22.444 "strip_size_kb": 64, 00:14:22.444 "state": "configuring", 00:14:22.444 "raid_level": "raid0", 00:14:22.444 "superblock": true, 00:14:22.444 "num_base_bdevs": 3, 00:14:22.444 "num_base_bdevs_discovered": 2, 00:14:22.444 "num_base_bdevs_operational": 3, 00:14:22.444 "base_bdevs_list": [ 00:14:22.444 { 00:14:22.444 "name": "BaseBdev1", 00:14:22.444 "uuid": "e1b28569-7c16-43fd-800f-d74416a62fee", 00:14:22.444 "is_configured": true, 00:14:22.444 "data_offset": 2048, 00:14:22.444 "data_size": 63488 00:14:22.444 }, 00:14:22.444 { 00:14:22.444 "name": null, 00:14:22.444 "uuid": "810a58ff-353d-462a-ab57-efcaa25c12fe", 00:14:22.444 "is_configured": false, 00:14:22.444 "data_offset": 2048, 00:14:22.444 "data_size": 63488 00:14:22.444 }, 00:14:22.444 { 00:14:22.444 "name": "BaseBdev3", 00:14:22.444 "uuid": "cfe92f83-91d7-435e-b5bb-c6b0e4f86edf", 00:14:22.444 "is_configured": true, 00:14:22.444 "data_offset": 2048, 00:14:22.444 "data_size": 63488 00:14:22.444 } 
00:14:22.444 ] 00:14:22.444 }' 00:14:22.444 13:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:22.444 13:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.012 13:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.012 13:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:23.271 13:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:23.271 13:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:23.271 [2024-07-26 13:14:03.774464] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:23.271 13:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:23.271 13:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:23.271 13:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:23.271 13:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:23.271 13:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:23.271 13:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:23.271 13:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:23.271 13:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:23.271 
13:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:23.271 13:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:23.271 13:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.271 13:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.531 13:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:23.531 "name": "Existed_Raid", 00:14:23.531 "uuid": "90a31ed1-d7bb-4050-b84d-a2f39948c203", 00:14:23.531 "strip_size_kb": 64, 00:14:23.531 "state": "configuring", 00:14:23.531 "raid_level": "raid0", 00:14:23.531 "superblock": true, 00:14:23.531 "num_base_bdevs": 3, 00:14:23.531 "num_base_bdevs_discovered": 1, 00:14:23.531 "num_base_bdevs_operational": 3, 00:14:23.531 "base_bdevs_list": [ 00:14:23.531 { 00:14:23.531 "name": "BaseBdev1", 00:14:23.531 "uuid": "e1b28569-7c16-43fd-800f-d74416a62fee", 00:14:23.531 "is_configured": true, 00:14:23.531 "data_offset": 2048, 00:14:23.531 "data_size": 63488 00:14:23.531 }, 00:14:23.531 { 00:14:23.531 "name": null, 00:14:23.531 "uuid": "810a58ff-353d-462a-ab57-efcaa25c12fe", 00:14:23.531 "is_configured": false, 00:14:23.531 "data_offset": 2048, 00:14:23.531 "data_size": 63488 00:14:23.531 }, 00:14:23.531 { 00:14:23.531 "name": null, 00:14:23.531 "uuid": "cfe92f83-91d7-435e-b5bb-c6b0e4f86edf", 00:14:23.531 "is_configured": false, 00:14:23.531 "data_offset": 2048, 00:14:23.531 "data_size": 63488 00:14:23.531 } 00:14:23.531 ] 00:14:23.531 }' 00:14:23.531 13:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:23.531 13:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.099 13:14:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.099 13:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:24.358 13:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:14:24.358 13:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:24.618 [2024-07-26 13:14:05.025782] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:24.618 13:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:24.618 13:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:24.618 13:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:24.618 13:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:24.618 13:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:24.618 13:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:24.618 13:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:24.618 13:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:24.618 13:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:24.618 13:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:24.618 13:14:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.618 13:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.878 13:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:24.878 "name": "Existed_Raid", 00:14:24.878 "uuid": "90a31ed1-d7bb-4050-b84d-a2f39948c203", 00:14:24.878 "strip_size_kb": 64, 00:14:24.878 "state": "configuring", 00:14:24.878 "raid_level": "raid0", 00:14:24.878 "superblock": true, 00:14:24.878 "num_base_bdevs": 3, 00:14:24.878 "num_base_bdevs_discovered": 2, 00:14:24.878 "num_base_bdevs_operational": 3, 00:14:24.878 "base_bdevs_list": [ 00:14:24.878 { 00:14:24.878 "name": "BaseBdev1", 00:14:24.878 "uuid": "e1b28569-7c16-43fd-800f-d74416a62fee", 00:14:24.878 "is_configured": true, 00:14:24.878 "data_offset": 2048, 00:14:24.878 "data_size": 63488 00:14:24.878 }, 00:14:24.878 { 00:14:24.878 "name": null, 00:14:24.878 "uuid": "810a58ff-353d-462a-ab57-efcaa25c12fe", 00:14:24.878 "is_configured": false, 00:14:24.878 "data_offset": 2048, 00:14:24.878 "data_size": 63488 00:14:24.878 }, 00:14:24.878 { 00:14:24.878 "name": "BaseBdev3", 00:14:24.878 "uuid": "cfe92f83-91d7-435e-b5bb-c6b0e4f86edf", 00:14:24.878 "is_configured": true, 00:14:24.878 "data_offset": 2048, 00:14:24.878 "data_size": 63488 00:14:24.878 } 00:14:24.878 ] 00:14:24.878 }' 00:14:24.878 13:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:24.878 13:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.446 13:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.446 13:14:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:25.705 13:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:25.706 13:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:25.965 [2024-07-26 13:14:06.249022] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:25.965 13:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:25.965 13:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:25.965 13:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:25.965 13:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:25.965 13:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:25.965 13:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:25.965 13:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:25.965 13:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:25.965 13:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:25.965 13:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:25.965 13:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.965 13:14:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.225 13:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:26.225 "name": "Existed_Raid", 00:14:26.225 "uuid": "90a31ed1-d7bb-4050-b84d-a2f39948c203", 00:14:26.225 "strip_size_kb": 64, 00:14:26.225 "state": "configuring", 00:14:26.225 "raid_level": "raid0", 00:14:26.225 "superblock": true, 00:14:26.225 "num_base_bdevs": 3, 00:14:26.225 "num_base_bdevs_discovered": 1, 00:14:26.225 "num_base_bdevs_operational": 3, 00:14:26.225 "base_bdevs_list": [ 00:14:26.225 { 00:14:26.225 "name": null, 00:14:26.225 "uuid": "e1b28569-7c16-43fd-800f-d74416a62fee", 00:14:26.225 "is_configured": false, 00:14:26.225 "data_offset": 2048, 00:14:26.225 "data_size": 63488 00:14:26.225 }, 00:14:26.225 { 00:14:26.225 "name": null, 00:14:26.225 "uuid": "810a58ff-353d-462a-ab57-efcaa25c12fe", 00:14:26.225 "is_configured": false, 00:14:26.225 "data_offset": 2048, 00:14:26.225 "data_size": 63488 00:14:26.225 }, 00:14:26.225 { 00:14:26.225 "name": "BaseBdev3", 00:14:26.225 "uuid": "cfe92f83-91d7-435e-b5bb-c6b0e4f86edf", 00:14:26.225 "is_configured": true, 00:14:26.225 "data_offset": 2048, 00:14:26.225 "data_size": 63488 00:14:26.225 } 00:14:26.225 ] 00:14:26.225 }' 00:14:26.225 13:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:26.225 13:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.827 13:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:26.827 13:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.827 13:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:14:26.827 13:14:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:27.093 [2024-07-26 13:14:07.518444] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:27.093 13:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:27.093 13:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:27.093 13:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:27.093 13:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:27.093 13:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:27.093 13:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:27.093 13:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:27.093 13:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:27.093 13:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:27.093 13:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:27.093 13:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.093 13:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.352 13:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:27.352 "name": 
"Existed_Raid", 00:14:27.352 "uuid": "90a31ed1-d7bb-4050-b84d-a2f39948c203", 00:14:27.352 "strip_size_kb": 64, 00:14:27.352 "state": "configuring", 00:14:27.352 "raid_level": "raid0", 00:14:27.352 "superblock": true, 00:14:27.352 "num_base_bdevs": 3, 00:14:27.352 "num_base_bdevs_discovered": 2, 00:14:27.352 "num_base_bdevs_operational": 3, 00:14:27.352 "base_bdevs_list": [ 00:14:27.352 { 00:14:27.352 "name": null, 00:14:27.352 "uuid": "e1b28569-7c16-43fd-800f-d74416a62fee", 00:14:27.352 "is_configured": false, 00:14:27.352 "data_offset": 2048, 00:14:27.352 "data_size": 63488 00:14:27.352 }, 00:14:27.352 { 00:14:27.352 "name": "BaseBdev2", 00:14:27.352 "uuid": "810a58ff-353d-462a-ab57-efcaa25c12fe", 00:14:27.352 "is_configured": true, 00:14:27.352 "data_offset": 2048, 00:14:27.352 "data_size": 63488 00:14:27.352 }, 00:14:27.352 { 00:14:27.352 "name": "BaseBdev3", 00:14:27.352 "uuid": "cfe92f83-91d7-435e-b5bb-c6b0e4f86edf", 00:14:27.353 "is_configured": true, 00:14:27.353 "data_offset": 2048, 00:14:27.353 "data_size": 63488 00:14:27.353 } 00:14:27.353 ] 00:14:27.353 }' 00:14:27.353 13:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:27.353 13:14:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.921 13:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.921 13:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:28.180 13:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:28.180 13:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.180 13:14:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:28.440 13:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u e1b28569-7c16-43fd-800f-d74416a62fee 00:14:28.440 [2024-07-26 13:14:08.961317] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:28.440 [2024-07-26 13:14:08.961447] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xf29040 00:14:28.440 [2024-07-26 13:14:08.961459] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:28.440 [2024-07-26 13:14:08.961627] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10dbe40 00:14:28.440 [2024-07-26 13:14:08.961729] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf29040 00:14:28.440 [2024-07-26 13:14:08.961738] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xf29040 00:14:28.440 [2024-07-26 13:14:08.961818] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.440 NewBaseBdev 00:14:28.699 13:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:28.699 13:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:28.699 13:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:28.699 13:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:28.699 13:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:28.699 13:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:28.699 13:14:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:28.699 13:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:28.958 [ 00:14:28.958 { 00:14:28.958 "name": "NewBaseBdev", 00:14:28.958 "aliases": [ 00:14:28.958 "e1b28569-7c16-43fd-800f-d74416a62fee" 00:14:28.958 ], 00:14:28.958 "product_name": "Malloc disk", 00:14:28.958 "block_size": 512, 00:14:28.958 "num_blocks": 65536, 00:14:28.958 "uuid": "e1b28569-7c16-43fd-800f-d74416a62fee", 00:14:28.958 "assigned_rate_limits": { 00:14:28.958 "rw_ios_per_sec": 0, 00:14:28.958 "rw_mbytes_per_sec": 0, 00:14:28.958 "r_mbytes_per_sec": 0, 00:14:28.958 "w_mbytes_per_sec": 0 00:14:28.958 }, 00:14:28.958 "claimed": true, 00:14:28.959 "claim_type": "exclusive_write", 00:14:28.959 "zoned": false, 00:14:28.959 "supported_io_types": { 00:14:28.959 "read": true, 00:14:28.959 "write": true, 00:14:28.959 "unmap": true, 00:14:28.959 "flush": true, 00:14:28.959 "reset": true, 00:14:28.959 "nvme_admin": false, 00:14:28.959 "nvme_io": false, 00:14:28.959 "nvme_io_md": false, 00:14:28.959 "write_zeroes": true, 00:14:28.959 "zcopy": true, 00:14:28.959 "get_zone_info": false, 00:14:28.959 "zone_management": false, 00:14:28.959 "zone_append": false, 00:14:28.959 "compare": false, 00:14:28.959 "compare_and_write": false, 00:14:28.959 "abort": true, 00:14:28.959 "seek_hole": false, 00:14:28.959 "seek_data": false, 00:14:28.959 "copy": true, 00:14:28.959 "nvme_iov_md": false 00:14:28.959 }, 00:14:28.959 "memory_domains": [ 00:14:28.959 { 00:14:28.959 "dma_device_id": "system", 00:14:28.959 "dma_device_type": 1 00:14:28.959 }, 00:14:28.959 { 00:14:28.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.959 "dma_device_type": 2 00:14:28.959 } 
00:14:28.959 ], 00:14:28.959 "driver_specific": {} 00:14:28.959 } 00:14:28.959 ] 00:14:28.959 13:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:28.959 13:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:28.959 13:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:28.959 13:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:28.959 13:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:28.959 13:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:28.959 13:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:28.959 13:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:28.959 13:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:28.959 13:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:28.959 13:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:28.959 13:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.959 13:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.218 13:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:29.218 "name": "Existed_Raid", 00:14:29.218 "uuid": "90a31ed1-d7bb-4050-b84d-a2f39948c203", 00:14:29.218 "strip_size_kb": 64, 00:14:29.218 "state": "online", 00:14:29.218 
"raid_level": "raid0", 00:14:29.218 "superblock": true, 00:14:29.218 "num_base_bdevs": 3, 00:14:29.218 "num_base_bdevs_discovered": 3, 00:14:29.218 "num_base_bdevs_operational": 3, 00:14:29.218 "base_bdevs_list": [ 00:14:29.218 { 00:14:29.218 "name": "NewBaseBdev", 00:14:29.218 "uuid": "e1b28569-7c16-43fd-800f-d74416a62fee", 00:14:29.218 "is_configured": true, 00:14:29.218 "data_offset": 2048, 00:14:29.218 "data_size": 63488 00:14:29.218 }, 00:14:29.218 { 00:14:29.218 "name": "BaseBdev2", 00:14:29.218 "uuid": "810a58ff-353d-462a-ab57-efcaa25c12fe", 00:14:29.218 "is_configured": true, 00:14:29.218 "data_offset": 2048, 00:14:29.218 "data_size": 63488 00:14:29.218 }, 00:14:29.218 { 00:14:29.218 "name": "BaseBdev3", 00:14:29.218 "uuid": "cfe92f83-91d7-435e-b5bb-c6b0e4f86edf", 00:14:29.218 "is_configured": true, 00:14:29.218 "data_offset": 2048, 00:14:29.218 "data_size": 63488 00:14:29.218 } 00:14:29.218 ] 00:14:29.218 }' 00:14:29.218 13:14:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:29.218 13:14:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.786 13:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:29.786 13:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:29.786 13:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:29.786 13:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:29.786 13:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:29.786 13:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:29.786 13:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:29.786 13:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:30.044 [2024-07-26 13:14:10.437488] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:30.044 13:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:30.044 "name": "Existed_Raid", 00:14:30.044 "aliases": [ 00:14:30.044 "90a31ed1-d7bb-4050-b84d-a2f39948c203" 00:14:30.044 ], 00:14:30.044 "product_name": "Raid Volume", 00:14:30.044 "block_size": 512, 00:14:30.044 "num_blocks": 190464, 00:14:30.044 "uuid": "90a31ed1-d7bb-4050-b84d-a2f39948c203", 00:14:30.044 "assigned_rate_limits": { 00:14:30.044 "rw_ios_per_sec": 0, 00:14:30.044 "rw_mbytes_per_sec": 0, 00:14:30.044 "r_mbytes_per_sec": 0, 00:14:30.044 "w_mbytes_per_sec": 0 00:14:30.044 }, 00:14:30.044 "claimed": false, 00:14:30.044 "zoned": false, 00:14:30.044 "supported_io_types": { 00:14:30.044 "read": true, 00:14:30.044 "write": true, 00:14:30.044 "unmap": true, 00:14:30.044 "flush": true, 00:14:30.044 "reset": true, 00:14:30.044 "nvme_admin": false, 00:14:30.044 "nvme_io": false, 00:14:30.044 "nvme_io_md": false, 00:14:30.044 "write_zeroes": true, 00:14:30.044 "zcopy": false, 00:14:30.044 "get_zone_info": false, 00:14:30.044 "zone_management": false, 00:14:30.044 "zone_append": false, 00:14:30.044 "compare": false, 00:14:30.044 "compare_and_write": false, 00:14:30.044 "abort": false, 00:14:30.044 "seek_hole": false, 00:14:30.044 "seek_data": false, 00:14:30.044 "copy": false, 00:14:30.044 "nvme_iov_md": false 00:14:30.044 }, 00:14:30.044 "memory_domains": [ 00:14:30.044 { 00:14:30.044 "dma_device_id": "system", 00:14:30.044 "dma_device_type": 1 00:14:30.044 }, 00:14:30.044 { 00:14:30.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.044 "dma_device_type": 2 00:14:30.044 }, 00:14:30.044 { 00:14:30.044 "dma_device_id": "system", 00:14:30.044 "dma_device_type": 1 00:14:30.044 
}, 00:14:30.044 { 00:14:30.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.044 "dma_device_type": 2 00:14:30.044 }, 00:14:30.044 { 00:14:30.044 "dma_device_id": "system", 00:14:30.044 "dma_device_type": 1 00:14:30.044 }, 00:14:30.044 { 00:14:30.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.044 "dma_device_type": 2 00:14:30.044 } 00:14:30.044 ], 00:14:30.044 "driver_specific": { 00:14:30.044 "raid": { 00:14:30.044 "uuid": "90a31ed1-d7bb-4050-b84d-a2f39948c203", 00:14:30.044 "strip_size_kb": 64, 00:14:30.044 "state": "online", 00:14:30.044 "raid_level": "raid0", 00:14:30.044 "superblock": true, 00:14:30.044 "num_base_bdevs": 3, 00:14:30.044 "num_base_bdevs_discovered": 3, 00:14:30.044 "num_base_bdevs_operational": 3, 00:14:30.044 "base_bdevs_list": [ 00:14:30.044 { 00:14:30.044 "name": "NewBaseBdev", 00:14:30.044 "uuid": "e1b28569-7c16-43fd-800f-d74416a62fee", 00:14:30.044 "is_configured": true, 00:14:30.044 "data_offset": 2048, 00:14:30.044 "data_size": 63488 00:14:30.044 }, 00:14:30.045 { 00:14:30.045 "name": "BaseBdev2", 00:14:30.045 "uuid": "810a58ff-353d-462a-ab57-efcaa25c12fe", 00:14:30.045 "is_configured": true, 00:14:30.045 "data_offset": 2048, 00:14:30.045 "data_size": 63488 00:14:30.045 }, 00:14:30.045 { 00:14:30.045 "name": "BaseBdev3", 00:14:30.045 "uuid": "cfe92f83-91d7-435e-b5bb-c6b0e4f86edf", 00:14:30.045 "is_configured": true, 00:14:30.045 "data_offset": 2048, 00:14:30.045 "data_size": 63488 00:14:30.045 } 00:14:30.045 ] 00:14:30.045 } 00:14:30.045 } 00:14:30.045 }' 00:14:30.045 13:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:30.045 13:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:30.045 BaseBdev2 00:14:30.045 BaseBdev3' 00:14:30.045 13:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:30.045 
13:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:30.045 13:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:30.304 13:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:30.304 "name": "NewBaseBdev", 00:14:30.304 "aliases": [ 00:14:30.304 "e1b28569-7c16-43fd-800f-d74416a62fee" 00:14:30.304 ], 00:14:30.304 "product_name": "Malloc disk", 00:14:30.304 "block_size": 512, 00:14:30.304 "num_blocks": 65536, 00:14:30.304 "uuid": "e1b28569-7c16-43fd-800f-d74416a62fee", 00:14:30.304 "assigned_rate_limits": { 00:14:30.304 "rw_ios_per_sec": 0, 00:14:30.304 "rw_mbytes_per_sec": 0, 00:14:30.304 "r_mbytes_per_sec": 0, 00:14:30.304 "w_mbytes_per_sec": 0 00:14:30.304 }, 00:14:30.304 "claimed": true, 00:14:30.304 "claim_type": "exclusive_write", 00:14:30.304 "zoned": false, 00:14:30.304 "supported_io_types": { 00:14:30.304 "read": true, 00:14:30.304 "write": true, 00:14:30.304 "unmap": true, 00:14:30.304 "flush": true, 00:14:30.304 "reset": true, 00:14:30.304 "nvme_admin": false, 00:14:30.304 "nvme_io": false, 00:14:30.304 "nvme_io_md": false, 00:14:30.304 "write_zeroes": true, 00:14:30.304 "zcopy": true, 00:14:30.304 "get_zone_info": false, 00:14:30.304 "zone_management": false, 00:14:30.304 "zone_append": false, 00:14:30.304 "compare": false, 00:14:30.304 "compare_and_write": false, 00:14:30.304 "abort": true, 00:14:30.304 "seek_hole": false, 00:14:30.304 "seek_data": false, 00:14:30.304 "copy": true, 00:14:30.304 "nvme_iov_md": false 00:14:30.304 }, 00:14:30.304 "memory_domains": [ 00:14:30.304 { 00:14:30.304 "dma_device_id": "system", 00:14:30.304 "dma_device_type": 1 00:14:30.304 }, 00:14:30.304 { 00:14:30.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.304 "dma_device_type": 2 00:14:30.304 } 00:14:30.304 ], 00:14:30.304 
"driver_specific": {} 00:14:30.304 }' 00:14:30.304 13:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:30.304 13:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:30.304 13:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:30.304 13:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:30.563 13:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:30.563 13:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:30.563 13:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:30.563 13:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:30.563 13:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:30.563 13:14:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.563 13:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.563 13:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:30.563 13:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:30.563 13:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:30.563 13:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:30.822 13:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:30.822 "name": "BaseBdev2", 00:14:30.822 "aliases": [ 00:14:30.822 "810a58ff-353d-462a-ab57-efcaa25c12fe" 00:14:30.822 ], 00:14:30.822 "product_name": 
"Malloc disk", 00:14:30.822 "block_size": 512, 00:14:30.822 "num_blocks": 65536, 00:14:30.822 "uuid": "810a58ff-353d-462a-ab57-efcaa25c12fe", 00:14:30.822 "assigned_rate_limits": { 00:14:30.822 "rw_ios_per_sec": 0, 00:14:30.822 "rw_mbytes_per_sec": 0, 00:14:30.822 "r_mbytes_per_sec": 0, 00:14:30.822 "w_mbytes_per_sec": 0 00:14:30.822 }, 00:14:30.822 "claimed": true, 00:14:30.822 "claim_type": "exclusive_write", 00:14:30.822 "zoned": false, 00:14:30.823 "supported_io_types": { 00:14:30.823 "read": true, 00:14:30.823 "write": true, 00:14:30.823 "unmap": true, 00:14:30.823 "flush": true, 00:14:30.823 "reset": true, 00:14:30.823 "nvme_admin": false, 00:14:30.823 "nvme_io": false, 00:14:30.823 "nvme_io_md": false, 00:14:30.823 "write_zeroes": true, 00:14:30.823 "zcopy": true, 00:14:30.823 "get_zone_info": false, 00:14:30.823 "zone_management": false, 00:14:30.823 "zone_append": false, 00:14:30.823 "compare": false, 00:14:30.823 "compare_and_write": false, 00:14:30.823 "abort": true, 00:14:30.823 "seek_hole": false, 00:14:30.823 "seek_data": false, 00:14:30.823 "copy": true, 00:14:30.823 "nvme_iov_md": false 00:14:30.823 }, 00:14:30.823 "memory_domains": [ 00:14:30.823 { 00:14:30.823 "dma_device_id": "system", 00:14:30.823 "dma_device_type": 1 00:14:30.823 }, 00:14:30.823 { 00:14:30.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.823 "dma_device_type": 2 00:14:30.823 } 00:14:30.823 ], 00:14:30.823 "driver_specific": {} 00:14:30.823 }' 00:14:30.823 13:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:30.823 13:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:31.081 13:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:31.081 13:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:31.081 13:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:31.081 
13:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:31.081 13:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:31.081 13:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:31.081 13:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:31.082 13:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:31.082 13:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:31.340 13:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:31.340 13:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:31.340 13:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:31.340 13:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:31.340 13:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:31.340 "name": "BaseBdev3", 00:14:31.340 "aliases": [ 00:14:31.340 "cfe92f83-91d7-435e-b5bb-c6b0e4f86edf" 00:14:31.340 ], 00:14:31.340 "product_name": "Malloc disk", 00:14:31.340 "block_size": 512, 00:14:31.340 "num_blocks": 65536, 00:14:31.340 "uuid": "cfe92f83-91d7-435e-b5bb-c6b0e4f86edf", 00:14:31.340 "assigned_rate_limits": { 00:14:31.340 "rw_ios_per_sec": 0, 00:14:31.340 "rw_mbytes_per_sec": 0, 00:14:31.340 "r_mbytes_per_sec": 0, 00:14:31.340 "w_mbytes_per_sec": 0 00:14:31.340 }, 00:14:31.340 "claimed": true, 00:14:31.340 "claim_type": "exclusive_write", 00:14:31.340 "zoned": false, 00:14:31.340 "supported_io_types": { 00:14:31.340 "read": true, 00:14:31.340 "write": true, 00:14:31.340 "unmap": true, 
00:14:31.340 "flush": true, 00:14:31.340 "reset": true, 00:14:31.340 "nvme_admin": false, 00:14:31.340 "nvme_io": false, 00:14:31.340 "nvme_io_md": false, 00:14:31.340 "write_zeroes": true, 00:14:31.340 "zcopy": true, 00:14:31.340 "get_zone_info": false, 00:14:31.340 "zone_management": false, 00:14:31.340 "zone_append": false, 00:14:31.340 "compare": false, 00:14:31.340 "compare_and_write": false, 00:14:31.340 "abort": true, 00:14:31.340 "seek_hole": false, 00:14:31.340 "seek_data": false, 00:14:31.340 "copy": true, 00:14:31.340 "nvme_iov_md": false 00:14:31.340 }, 00:14:31.340 "memory_domains": [ 00:14:31.340 { 00:14:31.340 "dma_device_id": "system", 00:14:31.340 "dma_device_type": 1 00:14:31.340 }, 00:14:31.340 { 00:14:31.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.340 "dma_device_type": 2 00:14:31.340 } 00:14:31.340 ], 00:14:31.340 "driver_specific": {} 00:14:31.340 }' 00:14:31.340 13:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:31.599 13:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:31.599 13:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:31.599 13:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:31.599 13:14:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:31.599 13:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:31.599 13:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:31.599 13:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:31.599 13:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:31.599 13:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:31.858 13:14:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:31.858 13:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:31.858 13:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:32.116 [2024-07-26 13:14:12.386356] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:32.116 [2024-07-26 13:14:12.386380] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:32.116 [2024-07-26 13:14:12.386428] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.116 [2024-07-26 13:14:12.386475] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:32.116 [2024-07-26 13:14:12.386485] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf29040 name Existed_Raid, state offline 00:14:32.116 13:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 684034 00:14:32.116 13:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 684034 ']' 00:14:32.116 13:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 684034 00:14:32.116 13:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:32.116 13:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:32.116 13:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 684034 00:14:32.116 13:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:32.117 13:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']'
00:14:32.117 13:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 684034'
00:14:32.117 killing process with pid 684034
00:14:32.117 13:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 684034
00:14:32.117 [2024-07-26 13:14:12.460539] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:32.117 13:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 684034
00:14:32.117 [2024-07-26 13:14:12.483732] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:32.376 13:14:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0
00:14:32.376
00:14:32.376 real 0m27.445s
00:14:32.376 user 0m50.330s
00:14:32.376 sys 0m4.945s
00:14:32.376 13:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:14:32.376 13:14:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:32.376 ************************************
00:14:32.376 END TEST raid_state_function_test_sb
00:14:32.376 ************************************
00:14:32.376 13:14:12 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 3
00:14:32.376 13:14:12 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:14:32.376 13:14:12 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:14:32.376 13:14:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:14:32.376 ************************************
00:14:32.376 START TEST raid_superblock_test
00:14:32.376 ************************************
00:14:32.376 13:14:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3
00:14:32.376 13:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0
00:14:32.376 13:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3
00:14:32.376 13:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=()
00:14:32.376 13:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc
00:14:32.376 13:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=()
00:14:32.376 13:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt
00:14:32.376 13:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=()
00:14:32.376 13:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid
00:14:32.376 13:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1
00:14:32.376 13:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size
00:14:32.376 13:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg
00:14:32.376 13:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid
00:14:32.376 13:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev
00:14:32.376 13:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']'
00:14:32.376 13:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64
00:14:32.376 13:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64'
00:14:32.376 13:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=689389
00:14:32.376 13:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 689389 /var/tmp/spdk-raid.sock
00:14:32.376 13:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:14:32.376 13:14:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 689389 ']'
00:14:32.376 13:14:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:14:32.376 13:14:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:14:32.376 13:14:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:14:32.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:14:32.376 13:14:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:14:32.376 13:14:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:32.376 [2024-07-26 13:14:12.820753] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization...
00:14:32.376 [2024-07-26 13:14:12.820807] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid689389 ]
00:14:32.376 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.376 EAL: Requested device 0000:3d:01.0 cannot be used
00:14:32.376 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.376 EAL: Requested device 0000:3d:01.1 cannot be used
00:14:32.376 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.376 EAL: Requested device 0000:3d:01.2 cannot be used
00:14:32.376 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.376 EAL: Requested device 0000:3d:01.3 cannot be used
00:14:32.376 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.376 EAL: Requested device 0000:3d:01.4 cannot be used
00:14:32.376 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.377 EAL: Requested device 0000:3d:01.5 cannot be used
00:14:32.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.377 EAL: Requested device 0000:3d:01.6 cannot be used
00:14:32.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.377 EAL: Requested device 0000:3d:01.7 cannot be used
00:14:32.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.377 EAL: Requested device 0000:3d:02.0 cannot be used
00:14:32.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.377 EAL: Requested device 0000:3d:02.1 cannot be used
00:14:32.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.377 EAL: Requested device 0000:3d:02.2 cannot be used
00:14:32.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.377 EAL: Requested device 0000:3d:02.3 cannot be used
00:14:32.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.377 EAL: Requested device 0000:3d:02.4 cannot be used
00:14:32.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.377 EAL: Requested device 0000:3d:02.5 cannot be used
00:14:32.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.377 EAL: Requested device 0000:3d:02.6 cannot be used
00:14:32.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.377 EAL: Requested device 0000:3d:02.7 cannot be used
00:14:32.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.377 EAL: Requested device 0000:3f:01.0 cannot be used
00:14:32.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.377 EAL: Requested device 0000:3f:01.1 cannot be used
00:14:32.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.377 EAL: Requested device 0000:3f:01.2 cannot be used
00:14:32.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.377 EAL: Requested device 0000:3f:01.3 cannot be used
00:14:32.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.377 EAL: Requested device 0000:3f:01.4 cannot be used
00:14:32.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.377 EAL: Requested device 0000:3f:01.5 cannot be used
00:14:32.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.377 EAL: Requested device 0000:3f:01.6 cannot be used
00:14:32.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.377 EAL: Requested device 0000:3f:01.7 cannot be used
00:14:32.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.377 EAL: Requested device 0000:3f:02.0 cannot be used
00:14:32.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.377 EAL: Requested device 0000:3f:02.1 cannot be used
00:14:32.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.377 EAL: Requested device 0000:3f:02.2 cannot be used
00:14:32.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.377 EAL: Requested device 0000:3f:02.3 cannot be used
00:14:32.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.377 EAL: Requested device 0000:3f:02.4 cannot be used
00:14:32.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.377 EAL: Requested device 0000:3f:02.5 cannot be used
00:14:32.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.377 EAL: Requested device 0000:3f:02.6 cannot be used
00:14:32.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:14:32.377 EAL: Requested device 0000:3f:02.7 cannot be used
00:14:32.636 [2024-07-26 13:14:12.953131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:32.636 [2024-07-26 13:14:13.039886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:14:32.636 [2024-07-26 13:14:13.094078] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:32.636 [2024-07-26 13:14:13.094106] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:33.207 13:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:14:33.207 13:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:14:33.207 13:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 ))
00:14:33.207 13:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs ))
00:14:33.207 13:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1
00:14:33.207 13:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1
00:14:33.207 13:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:14:33.207 13:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc)
00:14:33.207 13:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt)
00:14:33.207 13:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:14:33.207 13:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:14:33.466 malloc1
00:14:33.466 13:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:14:33.725 [2024-07-26 13:14:14.165718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:14:33.725 [2024-07-26 13:14:14.165760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:33.725 [2024-07-26 13:14:14.165783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12f32f0
00:14:33.725 [2024-07-26 13:14:14.165794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:33.725 [2024-07-26 13:14:14.167301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:33.725 [2024-07-26 13:14:14.167328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:14:33.725 pt1
00:14:33.725 13:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ ))
00:14:33.725 13:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs ))
00:14:33.725 13:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2
00:14:33.725 13:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2
00:14:33.725 13:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:14:33.725 13:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc)
00:14:33.725 13:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt)
00:14:33.725 13:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:14:33.725 13:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:14:33.983 malloc2
00:14:33.983 13:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:14:34.243 [2024-07-26 13:14:14.611435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:14:34.243 [2024-07-26 13:14:14.611473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:34.243 [2024-07-26 13:14:14.611489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12f46d0
00:14:34.243 [2024-07-26 13:14:14.611500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:34.243 [2024-07-26 13:14:14.612884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:34.243 [2024-07-26 13:14:14.612910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:14:34.243 pt2
00:14:34.243 13:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ ))
00:14:34.243 13:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs ))
00:14:34.243 13:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3
00:14:34.243 13:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3
00:14:34.243 13:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:14:34.243 13:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc)
00:14:34.243 13:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt)
00:14:34.243 13:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:14:34.243 13:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:14:34.501 malloc3
00:14:34.502 13:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:14:34.760 [2024-07-26 13:14:15.072857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:14:34.760 [2024-07-26 13:14:15.072897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:34.760 [2024-07-26 13:14:15.072913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x148d6b0
00:14:34.760 [2024-07-26 13:14:15.072924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:34.760 [2024-07-26 13:14:15.074280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:34.760 [2024-07-26 13:14:15.074309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:14:34.760 pt3
00:14:34.760 13:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ ))
00:14:34.760 13:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs ))
00:14:34.760 13:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
00:14:35.019 [2024-07-26 13:14:15.289444] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:14:35.019 [2024-07-26 13:14:15.290557] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:14:35.019 [2024-07-26 13:14:15.290607] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:14:35.019 [2024-07-26 13:14:15.290734] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x148dcb0
00:14:35.019 [2024-07-26 13:14:15.290744] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:14:35.019 [2024-07-26 13:14:15.290923] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x148d5a0
00:14:35.019 [2024-07-26 13:14:15.291049] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x148dcb0
00:14:35.019 [2024-07-26 13:14:15.291059] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x148dcb0
00:14:35.019 [2024-07-26 13:14:15.291168] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:35.019 13:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:14:35.019 13:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:14:35.019 13:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:14:35.019 13:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:14:35.019 13:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:14:35.019 13:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:14:35.019 13:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:14:35.019 13:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:14:35.019 13:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:14:35.019 13:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:14:35.019 13:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:35.019 13:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:35.019 13:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:14:35.019 "name": "raid_bdev1",
00:14:35.019 "uuid": "0018f975-e1a5-467f-9930-9bec550dc74a",
00:14:35.019 "strip_size_kb": 64,
00:14:35.019 "state": "online",
00:14:35.019 "raid_level": "raid0",
00:14:35.019 "superblock": true,
00:14:35.019 "num_base_bdevs": 3,
00:14:35.019 "num_base_bdevs_discovered": 3,
00:14:35.019 "num_base_bdevs_operational": 3,
00:14:35.019 "base_bdevs_list": [
00:14:35.019 {
00:14:35.019 "name": "pt1",
00:14:35.019 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:35.019 "is_configured": true,
00:14:35.019 "data_offset": 2048,
00:14:35.019 "data_size": 63488
00:14:35.019 },
00:14:35.019 {
00:14:35.019 "name": "pt2",
00:14:35.019 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:35.019 "is_configured": true,
00:14:35.019 "data_offset": 2048,
00:14:35.019 "data_size": 63488
00:14:35.019 },
00:14:35.019 {
00:14:35.019 "name": "pt3",
00:14:35.019 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:35.019 "is_configured": true,
00:14:35.019 "data_offset": 2048,
00:14:35.019 "data_size": 63488
00:14:35.019 }
00:14:35.019 ]
00:14:35.019 }'
00:14:35.019 13:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:14:35.019 13:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:35.587 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1
00:14:35.587 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1
00:14:35.587 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info
00:14:35.587 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info
00:14:35.587 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names
00:14:35.587 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name
00:14:35.587 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:14:35.587 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]'
00:14:35.846 [2024-07-26 13:14:16.240180] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:35.846 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{
00:14:35.846 "name": "raid_bdev1",
00:14:35.846 "aliases": [
00:14:35.846 "0018f975-e1a5-467f-9930-9bec550dc74a"
00:14:35.846 ],
00:14:35.846 "product_name": "Raid Volume",
00:14:35.846 "block_size": 512,
00:14:35.846 "num_blocks": 190464,
00:14:35.846 "uuid": "0018f975-e1a5-467f-9930-9bec550dc74a",
00:14:35.846 "assigned_rate_limits": {
00:14:35.846 "rw_ios_per_sec": 0,
00:14:35.846 "rw_mbytes_per_sec": 0,
00:14:35.846 "r_mbytes_per_sec": 0,
00:14:35.846 "w_mbytes_per_sec": 0
00:14:35.846 },
00:14:35.846 "claimed": false,
00:14:35.846 "zoned": false,
00:14:35.846 "supported_io_types": {
00:14:35.846 "read": true,
00:14:35.846 "write": true,
00:14:35.846 "unmap": true,
00:14:35.846 "flush": true,
00:14:35.846 "reset": true,
00:14:35.846 "nvme_admin": false,
00:14:35.846 "nvme_io": false,
00:14:35.846 "nvme_io_md": false,
00:14:35.846 "write_zeroes": true,
00:14:35.846 "zcopy": false,
00:14:35.846 "get_zone_info": false,
00:14:35.846 "zone_management": false,
00:14:35.846 "zone_append": false,
00:14:35.846 "compare": false,
00:14:35.846 "compare_and_write": false,
00:14:35.846 "abort": false,
00:14:35.846 "seek_hole": false,
00:14:35.846 "seek_data": false,
00:14:35.846 "copy": false,
00:14:35.846 "nvme_iov_md": false
00:14:35.846 },
00:14:35.846 "memory_domains": [
00:14:35.846 {
00:14:35.846 "dma_device_id": "system",
00:14:35.846 "dma_device_type": 1
00:14:35.846 },
00:14:35.846 {
00:14:35.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:35.846 "dma_device_type": 2
00:14:35.846 },
00:14:35.846 {
00:14:35.846 "dma_device_id": "system",
00:14:35.846 "dma_device_type": 1
00:14:35.846 },
00:14:35.846 {
00:14:35.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:35.846 "dma_device_type": 2
00:14:35.846 },
00:14:35.846 {
00:14:35.846 "dma_device_id": "system",
00:14:35.846 "dma_device_type": 1
00:14:35.846 },
00:14:35.846 {
00:14:35.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:35.846 "dma_device_type": 2
00:14:35.846 }
00:14:35.846 ],
00:14:35.846 "driver_specific": {
00:14:35.846 "raid": {
00:14:35.846 "uuid": "0018f975-e1a5-467f-9930-9bec550dc74a",
00:14:35.846 "strip_size_kb": 64,
00:14:35.846 "state": "online",
00:14:35.846 "raid_level": "raid0",
00:14:35.846 "superblock": true,
00:14:35.846 "num_base_bdevs": 3,
00:14:35.846 "num_base_bdevs_discovered": 3,
00:14:35.846 "num_base_bdevs_operational": 3,
00:14:35.846 "base_bdevs_list": [
00:14:35.846 {
00:14:35.846 "name": "pt1",
00:14:35.846 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:35.846 "is_configured": true,
00:14:35.846 "data_offset": 2048,
00:14:35.846 "data_size": 63488
00:14:35.846 },
00:14:35.846 {
00:14:35.846 "name": "pt2",
00:14:35.846 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:35.846 "is_configured": true,
00:14:35.846 "data_offset": 2048,
00:14:35.846 "data_size": 63488
00:14:35.846 },
00:14:35.846 {
00:14:35.846 "name": "pt3",
00:14:35.846 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:35.846 "is_configured": true,
00:14:35.846 "data_offset": 2048,
00:14:35.846 "data_size": 63488
00:14:35.846 }
00:14:35.846 ]
00:14:35.846 }
00:14:35.846 }
00:14:35.846 }'
00:14:35.846 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:14:35.846 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1
00:14:35.846 pt2
00:14:35.846 pt3'
00:14:35.846 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:14:35.846 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1
00:14:35.587 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:14:36.105 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:14:36.105 "name": "pt1",
00:14:36.105 "aliases": [
00:14:36.105 "00000000-0000-0000-0000-000000000001"
00:14:36.105 ],
00:14:36.105 "product_name": "passthru",
00:14:36.105 "block_size": 512,
00:14:36.105 "num_blocks": 65536,
00:14:36.105 "uuid": "00000000-0000-0000-0000-000000000001",
00:14:36.105 "assigned_rate_limits": {
00:14:36.105 "rw_ios_per_sec": 0,
00:14:36.105 "rw_mbytes_per_sec": 0,
00:14:36.105 "r_mbytes_per_sec": 0,
00:14:36.105 "w_mbytes_per_sec": 0
00:14:36.105 },
00:14:36.105 "claimed": true,
00:14:36.105 "claim_type": "exclusive_write",
00:14:36.105 "zoned": false,
00:14:36.105 "supported_io_types": {
00:14:36.105 "read": true,
00:14:36.105 "write": true,
00:14:36.105 "unmap": true,
00:14:36.105 "flush": true,
00:14:36.105 "reset": true,
00:14:36.105 "nvme_admin": false,
00:14:36.105 "nvme_io": false,
00:14:36.105 "nvme_io_md": false,
00:14:36.105 "write_zeroes": true,
00:14:36.105 "zcopy": true,
00:14:36.105 "get_zone_info": false,
00:14:36.105 "zone_management": false,
00:14:36.105 "zone_append": false,
00:14:36.105 "compare": false,
00:14:36.105 "compare_and_write": false,
00:14:36.105 "abort": true,
00:14:36.105 "seek_hole": false,
00:14:36.105 "seek_data": false,
00:14:36.105 "copy": true,
00:14:36.105 "nvme_iov_md": false
00:14:36.105 },
00:14:36.105 "memory_domains": [
00:14:36.105 {
00:14:36.105 "dma_device_id": "system",
00:14:36.105 "dma_device_type": 1
00:14:36.105 },
00:14:36.105 {
00:14:36.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:36.105 "dma_device_type": 2
00:14:36.105 }
00:14:36.105 ],
00:14:36.105 "driver_specific": {
00:14:36.105 "passthru": {
00:14:36.105 "name": "pt1",
00:14:36.105 "base_bdev_name": "malloc1"
00:14:36.105 }
00:14:36.105 }
00:14:36.105 }'
00:14:36.105 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:14:36.105 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:14:36.105 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:14:36.105 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:14:36.364 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:14:36.364 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:14:36.364 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:14:36.364 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:14:36.364 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:14:36.364 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:14:36.364 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:14:36.364 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:14:36.364 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:14:36.364 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2
00:14:36.364 13:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:14:36.622 13:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:14:36.623 "name": "pt2",
00:14:36.623 "aliases": [
00:14:36.623 "00000000-0000-0000-0000-000000000002"
00:14:36.623 ],
00:14:36.623 "product_name": "passthru",
00:14:36.623 "block_size": 512,
00:14:36.623 "num_blocks": 65536,
00:14:36.623 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:36.623 "assigned_rate_limits": {
00:14:36.623 "rw_ios_per_sec": 0,
00:14:36.623 "rw_mbytes_per_sec": 0,
00:14:36.623 "r_mbytes_per_sec": 0,
00:14:36.623 "w_mbytes_per_sec": 0
00:14:36.623 },
00:14:36.623 "claimed": true,
00:14:36.623 "claim_type": "exclusive_write",
00:14:36.623 "zoned": false,
00:14:36.623 "supported_io_types": {
00:14:36.623 "read": true,
00:14:36.623 "write": true,
00:14:36.623 "unmap": true,
00:14:36.623 "flush": true,
00:14:36.623 "reset": true,
00:14:36.623 "nvme_admin": false,
00:14:36.623 "nvme_io": false,
00:14:36.623 "nvme_io_md": false,
00:14:36.623 "write_zeroes": true,
00:14:36.623 "zcopy": true,
00:14:36.623 "get_zone_info": false,
00:14:36.623 "zone_management": false,
00:14:36.623 "zone_append": false,
00:14:36.623 "compare": false,
00:14:36.623 "compare_and_write": false,
00:14:36.623 "abort": true,
00:14:36.623 "seek_hole": false,
00:14:36.623 "seek_data": false,
00:14:36.623 "copy": true,
00:14:36.623 "nvme_iov_md": false
00:14:36.623 },
00:14:36.623 "memory_domains": [
00:14:36.623 {
00:14:36.623 "dma_device_id": "system",
00:14:36.623 "dma_device_type": 1
00:14:36.623 },
00:14:36.623 {
00:14:36.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:36.623 "dma_device_type": 2
00:14:36.623 }
00:14:36.623 ],
00:14:36.623 "driver_specific": {
00:14:36.623 "passthru": {
00:14:36.623 "name": "pt2",
00:14:36.623 "base_bdev_name": "malloc2"
00:14:36.623 }
00:14:36.623 }
00:14:36.623 }'
00:14:36.623 13:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:14:36.623 13:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:14:36.881 13:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:14:36.881 13:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:14:36.881 13:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:14:36.881 13:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:14:36.881 13:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:14:36.881 13:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:14:36.881 13:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:14:36.881 13:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:14:36.881 13:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:14:37.140 13:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:14:37.140 13:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:14:37.140 13:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3
00:14:37.140 13:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:14:37.140 13:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:14:37.140 "name": "pt3",
00:14:37.140 "aliases": [
00:14:37.140 "00000000-0000-0000-0000-000000000003"
00:14:37.140 ],
00:14:37.140 "product_name": "passthru",
00:14:37.140 "block_size": 512,
00:14:37.140 "num_blocks": 65536,
00:14:37.140 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:37.140 "assigned_rate_limits": {
00:14:37.140 "rw_ios_per_sec": 0,
00:14:37.140 "rw_mbytes_per_sec": 0,
00:14:37.140 "r_mbytes_per_sec": 0,
00:14:37.140 "w_mbytes_per_sec": 0
00:14:37.140 },
00:14:37.140 "claimed": true,
00:14:37.140 "claim_type": "exclusive_write",
00:14:37.140 "zoned": false,
00:14:37.140 "supported_io_types": {
00:14:37.140 "read": true,
00:14:37.140 "write": true,
00:14:37.140 "unmap": true,
00:14:37.140 "flush": true,
00:14:37.140 "reset": true,
00:14:37.140 "nvme_admin": false,
00:14:37.140 "nvme_io": false,
00:14:37.140 "nvme_io_md": false,
00:14:37.140 "write_zeroes": true,
00:14:37.140 "zcopy": true,
00:14:37.140 "get_zone_info": false,
00:14:37.140 "zone_management": false,
00:14:37.140 "zone_append": false,
00:14:37.140 "compare": false,
00:14:37.140 "compare_and_write": false,
00:14:37.140 "abort": true,
00:14:37.140 "seek_hole": false,
00:14:37.140 "seek_data": false,
00:14:37.140 "copy": true,
00:14:37.140 "nvme_iov_md": false
00:14:37.140 },
00:14:37.140 "memory_domains": [
00:14:37.140 {
00:14:37.140 "dma_device_id": "system",
00:14:37.140 "dma_device_type": 1
00:14:37.140 },
00:14:37.140 {
00:14:37.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:37.140 "dma_device_type": 2
00:14:37.140 }
00:14:37.140 ],
00:14:37.140 "driver_specific": {
00:14:37.140 "passthru": {
00:14:37.140 "name": "pt3",
00:14:37.140 "base_bdev_name": "malloc3"
00:14:37.140 }
00:14:37.140 }
00:14:37.140 }'
00:14:37.398 13:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:14:37.398 13:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:14:37.398 13:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:14:37.399 13:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:14:37.399 13:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:14:37.399 13:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:14:37.399 13:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:14:37.399 13:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:14:37.399 13:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:14:37.399 13:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:14:37.657 13:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:14:37.657 13:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:14:37.657 13:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:14:37.657 13:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid'
00:14:37.915 [2024-07-26 13:14:18.205374] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:37.915 13:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=0018f975-e1a5-467f-9930-9bec550dc74a
00:14:37.915 13:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 0018f975-e1a5-467f-9930-9bec550dc74a ']'
00:14:37.915 13:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:14:37.915 [2024-07-26 13:14:18.437711] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:37.915 [2024-07-26 13:14:18.437730] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:37.915 [2024-07-26 13:14:18.437777] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:37.915 [2024-07-26 13:14:18.437826] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:37.915 [2024-07-26 13:14:18.437838] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x148dcb0 name raid_bdev1, state offline
00:14:38.173 13:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:38.173 13:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]'
00:14:38.173 13:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev=
00:14:38.174 13:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']'
00:14:38.174 13:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}"
00:14:38.174 13:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:14:38.432 13:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}"
00:14:38.432 13:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:14:38.691 13:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}"
00:14:38.691 13:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:14:38.950 13:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:14:38.950 13:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:14:39.209 13:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']'
00:14:39.209 13:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:14:39.209 13:14:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:14:39.209 13:14:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:39.209 13:14:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:39.209 13:14:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:39.209 13:14:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:39.209 13:14:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:39.209 13:14:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:39.209 13:14:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:39.209 13:14:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:39.209 13:14:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:14:39.209 13:14:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:14:39.476 [2024-07-26 13:14:19.773172] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:39.476 [2024-07-26 13:14:19.774424] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:39.476 [2024-07-26 13:14:19.774463] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:39.476 [2024-07-26 13:14:19.774504] 
bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:39.476 [2024-07-26 13:14:19.774539] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:39.476 [2024-07-26 13:14:19.774560] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:39.476 [2024-07-26 13:14:19.774576] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:39.476 [2024-07-26 13:14:19.774586] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x148dcb0 name raid_bdev1, state configuring 00:14:39.476 request: 00:14:39.476 { 00:14:39.476 "name": "raid_bdev1", 00:14:39.476 "raid_level": "raid0", 00:14:39.476 "base_bdevs": [ 00:14:39.476 "malloc1", 00:14:39.476 "malloc2", 00:14:39.476 "malloc3" 00:14:39.476 ], 00:14:39.476 "strip_size_kb": 64, 00:14:39.476 "superblock": false, 00:14:39.476 "method": "bdev_raid_create", 00:14:39.476 "req_id": 1 00:14:39.476 } 00:14:39.476 Got JSON-RPC error response 00:14:39.476 response: 00:14:39.476 { 00:14:39.476 "code": -17, 00:14:39.476 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:39.476 } 00:14:39.476 13:14:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:39.476 13:14:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:39.476 13:14:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:39.476 13:14:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:39.476 13:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.476 13:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r 
'.[]' 00:14:39.742 13:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:14:39.742 13:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:14:39.742 13:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:39.742 [2024-07-26 13:14:20.218276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:39.742 [2024-07-26 13:14:20.218321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.742 [2024-07-26 13:14:20.218338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x148ad00 00:14:39.742 [2024-07-26 13:14:20.218349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.742 [2024-07-26 13:14:20.219821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.742 [2024-07-26 13:14:20.219849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:39.742 [2024-07-26 13:14:20.219908] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:39.742 [2024-07-26 13:14:20.219934] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:39.742 pt1 00:14:39.742 13:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:14:39.742 13:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:39.742 13:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:39.742 13:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:39.742 13:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:14:39.742 13:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:39.742 13:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:39.742 13:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:39.742 13:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:39.742 13:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:39.742 13:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.742 13:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.001 13:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:40.001 "name": "raid_bdev1", 00:14:40.001 "uuid": "0018f975-e1a5-467f-9930-9bec550dc74a", 00:14:40.001 "strip_size_kb": 64, 00:14:40.001 "state": "configuring", 00:14:40.001 "raid_level": "raid0", 00:14:40.001 "superblock": true, 00:14:40.001 "num_base_bdevs": 3, 00:14:40.001 "num_base_bdevs_discovered": 1, 00:14:40.001 "num_base_bdevs_operational": 3, 00:14:40.001 "base_bdevs_list": [ 00:14:40.001 { 00:14:40.001 "name": "pt1", 00:14:40.001 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:40.001 "is_configured": true, 00:14:40.001 "data_offset": 2048, 00:14:40.001 "data_size": 63488 00:14:40.001 }, 00:14:40.001 { 00:14:40.001 "name": null, 00:14:40.001 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:40.001 "is_configured": false, 00:14:40.001 "data_offset": 2048, 00:14:40.001 "data_size": 63488 00:14:40.001 }, 00:14:40.001 { 00:14:40.001 "name": null, 00:14:40.001 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:40.001 "is_configured": false, 00:14:40.001 "data_offset": 2048, 00:14:40.001 
"data_size": 63488 00:14:40.001 } 00:14:40.001 ] 00:14:40.001 }' 00:14:40.001 13:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:40.001 13:14:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.569 13:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:14:40.569 13:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:40.828 [2024-07-26 13:14:21.244982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:40.828 [2024-07-26 13:14:21.245025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.828 [2024-07-26 13:14:21.245041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1496d50 00:14:40.828 [2024-07-26 13:14:21.245052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.828 [2024-07-26 13:14:21.245366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.828 [2024-07-26 13:14:21.245383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:40.828 [2024-07-26 13:14:21.245440] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:40.828 [2024-07-26 13:14:21.245458] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:40.828 pt2 00:14:40.828 13:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:41.087 [2024-07-26 13:14:21.469578] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:41.087 13:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state 
raid_bdev1 configuring raid0 64 3 00:14:41.087 13:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:41.087 13:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:41.087 13:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:41.087 13:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:41.087 13:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:41.087 13:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:41.087 13:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:41.087 13:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:41.087 13:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:41.087 13:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.087 13:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.346 13:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:41.346 "name": "raid_bdev1", 00:14:41.346 "uuid": "0018f975-e1a5-467f-9930-9bec550dc74a", 00:14:41.346 "strip_size_kb": 64, 00:14:41.346 "state": "configuring", 00:14:41.346 "raid_level": "raid0", 00:14:41.346 "superblock": true, 00:14:41.346 "num_base_bdevs": 3, 00:14:41.346 "num_base_bdevs_discovered": 1, 00:14:41.346 "num_base_bdevs_operational": 3, 00:14:41.346 "base_bdevs_list": [ 00:14:41.346 { 00:14:41.346 "name": "pt1", 00:14:41.346 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:41.346 "is_configured": true, 00:14:41.346 "data_offset": 
2048, 00:14:41.346 "data_size": 63488 00:14:41.346 }, 00:14:41.346 { 00:14:41.346 "name": null, 00:14:41.346 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:41.346 "is_configured": false, 00:14:41.346 "data_offset": 2048, 00:14:41.346 "data_size": 63488 00:14:41.346 }, 00:14:41.346 { 00:14:41.346 "name": null, 00:14:41.346 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:41.346 "is_configured": false, 00:14:41.346 "data_offset": 2048, 00:14:41.346 "data_size": 63488 00:14:41.346 } 00:14:41.346 ] 00:14:41.346 }' 00:14:41.346 13:14:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:41.346 13:14:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.913 13:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:14:41.913 13:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:14:41.913 13:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:42.172 [2024-07-26 13:14:22.500287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:42.172 [2024-07-26 13:14:22.500331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.172 [2024-07-26 13:14:22.500349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12ecf30 00:14:42.172 [2024-07-26 13:14:22.500360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.172 [2024-07-26 13:14:22.500667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.172 [2024-07-26 13:14:22.500683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:42.172 [2024-07-26 13:14:22.500738] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid 
superblock found on bdev pt2 00:14:42.172 [2024-07-26 13:14:22.500755] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:42.172 pt2 00:14:42.172 13:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:14:42.172 13:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:14:42.172 13:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:42.430 [2024-07-26 13:14:22.728885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:42.430 [2024-07-26 13:14:22.728918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.430 [2024-07-26 13:14:22.728933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12ebef0 00:14:42.430 [2024-07-26 13:14:22.728944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.430 [2024-07-26 13:14:22.729229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.430 [2024-07-26 13:14:22.729245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:42.430 [2024-07-26 13:14:22.729294] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:42.430 [2024-07-26 13:14:22.729311] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:42.430 [2024-07-26 13:14:22.729410] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x12e9c20 00:14:42.430 [2024-07-26 13:14:22.729425] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:42.430 [2024-07-26 13:14:22.729582] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12f3f40 00:14:42.430 [2024-07-26 13:14:22.729698] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x12e9c20 00:14:42.430 [2024-07-26 13:14:22.729707] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x12e9c20 00:14:42.430 [2024-07-26 13:14:22.729794] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.430 pt3 00:14:42.430 13:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:14:42.430 13:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:14:42.430 13:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:42.430 13:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:42.430 13:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:42.431 13:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:42.431 13:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:42.431 13:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:42.431 13:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:42.431 13:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:42.431 13:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:42.431 13:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:42.431 13:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.431 13:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:42.690 13:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:42.690 "name": "raid_bdev1", 00:14:42.690 "uuid": "0018f975-e1a5-467f-9930-9bec550dc74a", 00:14:42.690 "strip_size_kb": 64, 00:14:42.690 "state": "online", 00:14:42.690 "raid_level": "raid0", 00:14:42.690 "superblock": true, 00:14:42.690 "num_base_bdevs": 3, 00:14:42.690 "num_base_bdevs_discovered": 3, 00:14:42.690 "num_base_bdevs_operational": 3, 00:14:42.690 "base_bdevs_list": [ 00:14:42.690 { 00:14:42.690 "name": "pt1", 00:14:42.690 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:42.690 "is_configured": true, 00:14:42.690 "data_offset": 2048, 00:14:42.691 "data_size": 63488 00:14:42.691 }, 00:14:42.691 { 00:14:42.691 "name": "pt2", 00:14:42.691 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:42.691 "is_configured": true, 00:14:42.691 "data_offset": 2048, 00:14:42.691 "data_size": 63488 00:14:42.691 }, 00:14:42.691 { 00:14:42.691 "name": "pt3", 00:14:42.691 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:42.691 "is_configured": true, 00:14:42.691 "data_offset": 2048, 00:14:42.691 "data_size": 63488 00:14:42.691 } 00:14:42.691 ] 00:14:42.691 }' 00:14:42.691 13:14:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:42.691 13:14:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.258 13:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:14:43.258 13:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:43.258 13:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:43.258 13:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:43.258 13:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:43.258 13:14:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@198 -- # local name 00:14:43.258 13:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:43.258 13:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:43.258 [2024-07-26 13:14:23.779891] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.517 13:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:43.517 "name": "raid_bdev1", 00:14:43.517 "aliases": [ 00:14:43.517 "0018f975-e1a5-467f-9930-9bec550dc74a" 00:14:43.517 ], 00:14:43.517 "product_name": "Raid Volume", 00:14:43.517 "block_size": 512, 00:14:43.517 "num_blocks": 190464, 00:14:43.517 "uuid": "0018f975-e1a5-467f-9930-9bec550dc74a", 00:14:43.517 "assigned_rate_limits": { 00:14:43.517 "rw_ios_per_sec": 0, 00:14:43.517 "rw_mbytes_per_sec": 0, 00:14:43.517 "r_mbytes_per_sec": 0, 00:14:43.517 "w_mbytes_per_sec": 0 00:14:43.517 }, 00:14:43.517 "claimed": false, 00:14:43.517 "zoned": false, 00:14:43.517 "supported_io_types": { 00:14:43.517 "read": true, 00:14:43.517 "write": true, 00:14:43.517 "unmap": true, 00:14:43.517 "flush": true, 00:14:43.517 "reset": true, 00:14:43.517 "nvme_admin": false, 00:14:43.517 "nvme_io": false, 00:14:43.517 "nvme_io_md": false, 00:14:43.517 "write_zeroes": true, 00:14:43.517 "zcopy": false, 00:14:43.517 "get_zone_info": false, 00:14:43.517 "zone_management": false, 00:14:43.517 "zone_append": false, 00:14:43.517 "compare": false, 00:14:43.517 "compare_and_write": false, 00:14:43.517 "abort": false, 00:14:43.517 "seek_hole": false, 00:14:43.517 "seek_data": false, 00:14:43.517 "copy": false, 00:14:43.517 "nvme_iov_md": false 00:14:43.517 }, 00:14:43.517 "memory_domains": [ 00:14:43.517 { 00:14:43.517 "dma_device_id": "system", 00:14:43.517 "dma_device_type": 1 00:14:43.517 }, 00:14:43.517 { 00:14:43.517 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:43.517 "dma_device_type": 2 00:14:43.517 }, 00:14:43.517 { 00:14:43.517 "dma_device_id": "system", 00:14:43.517 "dma_device_type": 1 00:14:43.517 }, 00:14:43.517 { 00:14:43.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.517 "dma_device_type": 2 00:14:43.517 }, 00:14:43.517 { 00:14:43.517 "dma_device_id": "system", 00:14:43.517 "dma_device_type": 1 00:14:43.517 }, 00:14:43.517 { 00:14:43.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.517 "dma_device_type": 2 00:14:43.517 } 00:14:43.517 ], 00:14:43.517 "driver_specific": { 00:14:43.517 "raid": { 00:14:43.517 "uuid": "0018f975-e1a5-467f-9930-9bec550dc74a", 00:14:43.517 "strip_size_kb": 64, 00:14:43.517 "state": "online", 00:14:43.517 "raid_level": "raid0", 00:14:43.517 "superblock": true, 00:14:43.517 "num_base_bdevs": 3, 00:14:43.517 "num_base_bdevs_discovered": 3, 00:14:43.517 "num_base_bdevs_operational": 3, 00:14:43.517 "base_bdevs_list": [ 00:14:43.517 { 00:14:43.517 "name": "pt1", 00:14:43.517 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:43.517 "is_configured": true, 00:14:43.517 "data_offset": 2048, 00:14:43.517 "data_size": 63488 00:14:43.517 }, 00:14:43.517 { 00:14:43.517 "name": "pt2", 00:14:43.517 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:43.517 "is_configured": true, 00:14:43.517 "data_offset": 2048, 00:14:43.517 "data_size": 63488 00:14:43.517 }, 00:14:43.517 { 00:14:43.517 "name": "pt3", 00:14:43.517 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:43.517 "is_configured": true, 00:14:43.517 "data_offset": 2048, 00:14:43.517 "data_size": 63488 00:14:43.517 } 00:14:43.517 ] 00:14:43.517 } 00:14:43.517 } 00:14:43.517 }' 00:14:43.517 13:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:43.517 13:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:43.517 pt2 00:14:43.517 pt3' 00:14:43.517 
13:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:43.517 13:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:43.517 13:14:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:43.776 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:43.776 "name": "pt1", 00:14:43.776 "aliases": [ 00:14:43.776 "00000000-0000-0000-0000-000000000001" 00:14:43.776 ], 00:14:43.776 "product_name": "passthru", 00:14:43.776 "block_size": 512, 00:14:43.776 "num_blocks": 65536, 00:14:43.776 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:43.776 "assigned_rate_limits": { 00:14:43.776 "rw_ios_per_sec": 0, 00:14:43.776 "rw_mbytes_per_sec": 0, 00:14:43.776 "r_mbytes_per_sec": 0, 00:14:43.776 "w_mbytes_per_sec": 0 00:14:43.776 }, 00:14:43.776 "claimed": true, 00:14:43.776 "claim_type": "exclusive_write", 00:14:43.776 "zoned": false, 00:14:43.776 "supported_io_types": { 00:14:43.776 "read": true, 00:14:43.776 "write": true, 00:14:43.776 "unmap": true, 00:14:43.776 "flush": true, 00:14:43.776 "reset": true, 00:14:43.776 "nvme_admin": false, 00:14:43.776 "nvme_io": false, 00:14:43.776 "nvme_io_md": false, 00:14:43.776 "write_zeroes": true, 00:14:43.776 "zcopy": true, 00:14:43.776 "get_zone_info": false, 00:14:43.776 "zone_management": false, 00:14:43.776 "zone_append": false, 00:14:43.776 "compare": false, 00:14:43.776 "compare_and_write": false, 00:14:43.776 "abort": true, 00:14:43.776 "seek_hole": false, 00:14:43.776 "seek_data": false, 00:14:43.776 "copy": true, 00:14:43.776 "nvme_iov_md": false 00:14:43.776 }, 00:14:43.776 "memory_domains": [ 00:14:43.776 { 00:14:43.776 "dma_device_id": "system", 00:14:43.776 "dma_device_type": 1 00:14:43.776 }, 00:14:43.776 { 00:14:43.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.776 
"dma_device_type": 2 00:14:43.776 } 00:14:43.776 ], 00:14:43.776 "driver_specific": { 00:14:43.776 "passthru": { 00:14:43.776 "name": "pt1", 00:14:43.776 "base_bdev_name": "malloc1" 00:14:43.776 } 00:14:43.776 } 00:14:43.776 }' 00:14:43.777 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:43.777 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:43.777 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:43.777 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:43.777 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:43.777 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:43.777 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:43.777 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:44.036 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:44.036 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.036 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.036 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:44.036 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:44.036 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:44.036 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:44.295 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:44.295 "name": "pt2", 00:14:44.295 "aliases": [ 00:14:44.295 
"00000000-0000-0000-0000-000000000002" 00:14:44.295 ], 00:14:44.295 "product_name": "passthru", 00:14:44.295 "block_size": 512, 00:14:44.295 "num_blocks": 65536, 00:14:44.295 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:44.295 "assigned_rate_limits": { 00:14:44.295 "rw_ios_per_sec": 0, 00:14:44.295 "rw_mbytes_per_sec": 0, 00:14:44.295 "r_mbytes_per_sec": 0, 00:14:44.295 "w_mbytes_per_sec": 0 00:14:44.295 }, 00:14:44.295 "claimed": true, 00:14:44.295 "claim_type": "exclusive_write", 00:14:44.295 "zoned": false, 00:14:44.295 "supported_io_types": { 00:14:44.295 "read": true, 00:14:44.295 "write": true, 00:14:44.295 "unmap": true, 00:14:44.295 "flush": true, 00:14:44.295 "reset": true, 00:14:44.295 "nvme_admin": false, 00:14:44.295 "nvme_io": false, 00:14:44.295 "nvme_io_md": false, 00:14:44.295 "write_zeroes": true, 00:14:44.295 "zcopy": true, 00:14:44.295 "get_zone_info": false, 00:14:44.295 "zone_management": false, 00:14:44.295 "zone_append": false, 00:14:44.295 "compare": false, 00:14:44.295 "compare_and_write": false, 00:14:44.295 "abort": true, 00:14:44.295 "seek_hole": false, 00:14:44.295 "seek_data": false, 00:14:44.295 "copy": true, 00:14:44.295 "nvme_iov_md": false 00:14:44.295 }, 00:14:44.295 "memory_domains": [ 00:14:44.295 { 00:14:44.295 "dma_device_id": "system", 00:14:44.295 "dma_device_type": 1 00:14:44.295 }, 00:14:44.295 { 00:14:44.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.295 "dma_device_type": 2 00:14:44.295 } 00:14:44.295 ], 00:14:44.295 "driver_specific": { 00:14:44.295 "passthru": { 00:14:44.295 "name": "pt2", 00:14:44.295 "base_bdev_name": "malloc2" 00:14:44.295 } 00:14:44.295 } 00:14:44.295 }' 00:14:44.295 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:44.295 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:44.295 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:44.295 13:14:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:44.295 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:44.295 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:44.295 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:44.555 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:44.555 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:44.555 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.555 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.555 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:44.555 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:44.555 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:14:44.555 13:14:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:44.814 13:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:44.814 "name": "pt3", 00:14:44.814 "aliases": [ 00:14:44.814 "00000000-0000-0000-0000-000000000003" 00:14:44.814 ], 00:14:44.814 "product_name": "passthru", 00:14:44.814 "block_size": 512, 00:14:44.814 "num_blocks": 65536, 00:14:44.814 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:44.814 "assigned_rate_limits": { 00:14:44.814 "rw_ios_per_sec": 0, 00:14:44.814 "rw_mbytes_per_sec": 0, 00:14:44.814 "r_mbytes_per_sec": 0, 00:14:44.814 "w_mbytes_per_sec": 0 00:14:44.814 }, 00:14:44.814 "claimed": true, 00:14:44.814 "claim_type": "exclusive_write", 00:14:44.814 "zoned": false, 00:14:44.814 "supported_io_types": { 
00:14:44.814 "read": true, 00:14:44.814 "write": true, 00:14:44.814 "unmap": true, 00:14:44.814 "flush": true, 00:14:44.814 "reset": true, 00:14:44.814 "nvme_admin": false, 00:14:44.814 "nvme_io": false, 00:14:44.814 "nvme_io_md": false, 00:14:44.814 "write_zeroes": true, 00:14:44.814 "zcopy": true, 00:14:44.814 "get_zone_info": false, 00:14:44.814 "zone_management": false, 00:14:44.814 "zone_append": false, 00:14:44.814 "compare": false, 00:14:44.814 "compare_and_write": false, 00:14:44.814 "abort": true, 00:14:44.814 "seek_hole": false, 00:14:44.814 "seek_data": false, 00:14:44.814 "copy": true, 00:14:44.814 "nvme_iov_md": false 00:14:44.814 }, 00:14:44.814 "memory_domains": [ 00:14:44.814 { 00:14:44.814 "dma_device_id": "system", 00:14:44.814 "dma_device_type": 1 00:14:44.814 }, 00:14:44.814 { 00:14:44.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.814 "dma_device_type": 2 00:14:44.814 } 00:14:44.814 ], 00:14:44.814 "driver_specific": { 00:14:44.814 "passthru": { 00:14:44.814 "name": "pt3", 00:14:44.814 "base_bdev_name": "malloc3" 00:14:44.814 } 00:14:44.814 } 00:14:44.814 }' 00:14:44.814 13:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:44.814 13:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:44.814 13:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:44.814 13:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:45.073 13:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:45.073 13:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:45.073 13:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:45.073 13:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:45.073 13:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:14:45.073 13:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:45.073 13:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:45.073 13:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:45.073 13:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:45.073 13:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:14:45.333 [2024-07-26 13:14:25.753222] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:45.333 13:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 0018f975-e1a5-467f-9930-9bec550dc74a '!=' 0018f975-e1a5-467f-9930-9bec550dc74a ']' 00:14:45.333 13:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:14:45.333 13:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:45.333 13:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:45.333 13:14:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 689389 00:14:45.333 13:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 689389 ']' 00:14:45.333 13:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 689389 00:14:45.333 13:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:14:45.333 13:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:45.333 13:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 689389 00:14:45.333 13:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:45.333 13:14:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:45.333 13:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 689389' 00:14:45.333 killing process with pid 689389 00:14:45.333 13:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 689389 00:14:45.333 [2024-07-26 13:14:25.820358] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:45.333 [2024-07-26 13:14:25.820408] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.333 [2024-07-26 13:14:25.820454] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:45.333 [2024-07-26 13:14:25.820465] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12e9c20 name raid_bdev1, state offline 00:14:45.333 13:14:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 689389 00:14:45.333 [2024-07-26 13:14:25.843345] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:45.592 13:14:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:14:45.592 00:14:45.592 real 0m13.269s 00:14:45.592 user 0m23.964s 00:14:45.592 sys 0m2.355s 00:14:45.592 13:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:45.592 13:14:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.592 ************************************ 00:14:45.592 END TEST raid_superblock_test 00:14:45.592 ************************************ 00:14:45.592 13:14:26 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:14:45.592 13:14:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:45.592 13:14:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:45.592 13:14:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:45.592 
************************************ 00:14:45.592 START TEST raid_read_error_test 00:14:45.592 ************************************ 00:14:45.593 13:14:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:14:45.593 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:14:45.593 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:14:45.593 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # 
local raid_bdev_name=raid_bdev1 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.cinkZoo5em 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=691801 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 691801 /var/tmp/spdk-raid.sock 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 691801 ']' 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:14:45.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:45.852 13:14:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.852 [2024-07-26 13:14:26.189964] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:14:45.852 [2024-07-26 13:14:26.190019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid691801 ] 00:14:45.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.852 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:45.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.852 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:45.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.852 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:45.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.852 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:45.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.852 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:45.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.852 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:45.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.852 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:45.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.852 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:45.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.852 EAL: Requested device 0000:3d:02.0 cannot be used 
00:14:45.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.852 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:45.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.852 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:45.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.852 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:45.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.852 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:45.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.852 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:45.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.852 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:45.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.852 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:45.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.852 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:45.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.852 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:45.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.852 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:45.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.852 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:45.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.852 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:45.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.853 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:45.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.853 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:45.853 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.853 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:45.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.853 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:45.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.853 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:45.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.853 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:45.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.853 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:45.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.853 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:45.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.853 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:45.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.853 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:45.853 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:45.853 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:45.853 [2024-07-26 13:14:26.322385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.112 [2024-07-26 13:14:26.411155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.112 [2024-07-26 13:14:26.472938] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:46.112 [2024-07-26 13:14:26.472973] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:46.680 13:14:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:46.680 13:14:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:14:46.680 13:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in 
"${base_bdevs[@]}" 00:14:46.680 13:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:46.939 BaseBdev1_malloc 00:14:46.939 13:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:47.198 true 00:14:47.198 13:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:47.457 [2024-07-26 13:14:27.743479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:47.457 [2024-07-26 13:14:27.743518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.457 [2024-07-26 13:14:27.743535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1994190 00:14:47.457 [2024-07-26 13:14:27.743546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.457 [2024-07-26 13:14:27.745082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.457 [2024-07-26 13:14:27.745109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:47.457 BaseBdev1 00:14:47.457 13:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:47.457 13:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:47.457 BaseBdev2_malloc 00:14:47.716 13:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:47.716 true 00:14:47.716 13:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:47.975 [2024-07-26 13:14:28.421530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:47.975 [2024-07-26 13:14:28.421567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.975 [2024-07-26 13:14:28.421584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1998e20 00:14:47.975 [2024-07-26 13:14:28.421596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.975 [2024-07-26 13:14:28.422923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.975 [2024-07-26 13:14:28.422948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:47.975 BaseBdev2 00:14:47.975 13:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:47.975 13:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:48.235 BaseBdev3_malloc 00:14:48.235 13:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:48.493 true 00:14:48.494 13:14:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:48.752 [2024-07-26 13:14:29.107656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev3_malloc 00:14:48.752 [2024-07-26 13:14:29.107697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.752 [2024-07-26 13:14:29.107717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1999d90 00:14:48.752 [2024-07-26 13:14:29.107729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.752 [2024-07-26 13:14:29.109105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.752 [2024-07-26 13:14:29.109131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:48.752 BaseBdev3 00:14:48.752 13:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:14:49.011 [2024-07-26 13:14:29.328262] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:49.011 [2024-07-26 13:14:29.329430] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:49.011 [2024-07-26 13:14:29.329493] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:49.011 [2024-07-26 13:14:29.329666] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x199bba0 00:14:49.011 [2024-07-26 13:14:29.329677] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:49.011 [2024-07-26 13:14:29.329862] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x199fef0 00:14:49.011 [2024-07-26 13:14:29.329999] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x199bba0 00:14:49.011 [2024-07-26 13:14:29.330008] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x199bba0 00:14:49.011 [2024-07-26 13:14:29.330116] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.011 13:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:49.011 13:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:49.011 13:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:49.011 13:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:49.011 13:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:49.011 13:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:49.011 13:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:49.011 13:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:49.011 13:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:49.011 13:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:49.011 13:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.011 13:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.270 13:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:49.270 "name": "raid_bdev1", 00:14:49.270 "uuid": "5d196001-0671-4ff9-98a4-934bd6212fbb", 00:14:49.270 "strip_size_kb": 64, 00:14:49.270 "state": "online", 00:14:49.270 "raid_level": "raid0", 00:14:49.270 "superblock": true, 00:14:49.270 "num_base_bdevs": 3, 00:14:49.270 "num_base_bdevs_discovered": 3, 00:14:49.270 "num_base_bdevs_operational": 3, 00:14:49.270 "base_bdevs_list": [ 00:14:49.270 { 
00:14:49.270 "name": "BaseBdev1", 00:14:49.270 "uuid": "e64600fb-0090-5aad-89ae-f9262e7b8d17", 00:14:49.270 "is_configured": true, 00:14:49.270 "data_offset": 2048, 00:14:49.270 "data_size": 63488 00:14:49.270 }, 00:14:49.270 { 00:14:49.270 "name": "BaseBdev2", 00:14:49.270 "uuid": "8bbb7561-d96e-5a01-a368-2cacfba97136", 00:14:49.270 "is_configured": true, 00:14:49.270 "data_offset": 2048, 00:14:49.270 "data_size": 63488 00:14:49.270 }, 00:14:49.270 { 00:14:49.270 "name": "BaseBdev3", 00:14:49.270 "uuid": "f13c35aa-3d21-591c-bcc5-b5f7808dc91c", 00:14:49.270 "is_configured": true, 00:14:49.270 "data_offset": 2048, 00:14:49.270 "data_size": 63488 00:14:49.270 } 00:14:49.270 ] 00:14:49.270 }' 00:14:49.270 13:14:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:49.270 13:14:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.836 13:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:49.836 13:14:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:14:49.836 [2024-07-26 13:14:30.242908] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x199cf70 00:14:50.774 13:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:51.034 13:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:14:51.034 13:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:51.034 13:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:14:51.034 13:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online 
raid0 64 3 00:14:51.034 13:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:51.034 13:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:51.034 13:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:51.034 13:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:51.034 13:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:51.034 13:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:51.034 13:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:51.034 13:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:51.034 13:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:51.034 13:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.034 13:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.294 13:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:51.294 "name": "raid_bdev1", 00:14:51.294 "uuid": "5d196001-0671-4ff9-98a4-934bd6212fbb", 00:14:51.294 "strip_size_kb": 64, 00:14:51.294 "state": "online", 00:14:51.294 "raid_level": "raid0", 00:14:51.294 "superblock": true, 00:14:51.294 "num_base_bdevs": 3, 00:14:51.294 "num_base_bdevs_discovered": 3, 00:14:51.294 "num_base_bdevs_operational": 3, 00:14:51.294 "base_bdevs_list": [ 00:14:51.294 { 00:14:51.294 "name": "BaseBdev1", 00:14:51.294 "uuid": "e64600fb-0090-5aad-89ae-f9262e7b8d17", 00:14:51.294 "is_configured": true, 00:14:51.294 "data_offset": 2048, 00:14:51.294 
"data_size": 63488 00:14:51.294 }, 00:14:51.294 { 00:14:51.294 "name": "BaseBdev2", 00:14:51.294 "uuid": "8bbb7561-d96e-5a01-a368-2cacfba97136", 00:14:51.294 "is_configured": true, 00:14:51.294 "data_offset": 2048, 00:14:51.294 "data_size": 63488 00:14:51.294 }, 00:14:51.294 { 00:14:51.294 "name": "BaseBdev3", 00:14:51.294 "uuid": "f13c35aa-3d21-591c-bcc5-b5f7808dc91c", 00:14:51.294 "is_configured": true, 00:14:51.294 "data_offset": 2048, 00:14:51.294 "data_size": 63488 00:14:51.294 } 00:14:51.294 ] 00:14:51.294 }' 00:14:51.294 13:14:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:51.294 13:14:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.863 13:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:52.122 [2024-07-26 13:14:32.421846] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:52.122 [2024-07-26 13:14:32.421881] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:52.122 [2024-07-26 13:14:32.424792] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:52.122 [2024-07-26 13:14:32.424826] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.122 [2024-07-26 13:14:32.424857] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:52.122 [2024-07-26 13:14:32.424867] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x199bba0 name raid_bdev1, state offline 00:14:52.122 0 00:14:52.122 13:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 691801 00:14:52.122 13:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 691801 ']' 00:14:52.122 13:14:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@954 -- # kill -0 691801 00:14:52.122 13:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:14:52.122 13:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:52.122 13:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 691801 00:14:52.122 13:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:52.122 13:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:52.122 13:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 691801' 00:14:52.122 killing process with pid 691801 00:14:52.122 13:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 691801 00:14:52.122 [2024-07-26 13:14:32.495220] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:52.122 13:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 691801 00:14:52.122 [2024-07-26 13:14:32.513196] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:52.382 13:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.cinkZoo5em 00:14:52.382 13:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:14:52.382 13:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:14:52.382 13:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.46 00:14:52.382 13:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:14:52.382 13:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:52.382 13:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:52.382 13:14:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 
0.46 != \0\.\0\0 ]] 00:14:52.382 00:14:52.382 real 0m6.600s 00:14:52.382 user 0m10.380s 00:14:52.382 sys 0m1.145s 00:14:52.382 13:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:52.382 13:14:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.382 ************************************ 00:14:52.382 END TEST raid_read_error_test 00:14:52.382 ************************************ 00:14:52.382 13:14:32 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:14:52.382 13:14:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:52.382 13:14:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:52.382 13:14:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:52.382 ************************************ 00:14:52.382 START TEST raid_write_error_test 00:14:52.382 ************************************ 00:14:52.382 13:14:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:14:52.382 13:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:14:52.382 13:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:14:52.382 13:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:14:52.382 13:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:14:52.382 13:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:52.382 13:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:14:52.382 13:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:52.382 13:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:52.383 13:14:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:14:52.383 13:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:52.383 13:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:52.383 13:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:14:52.383 13:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:52.383 13:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:52.383 13:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:52.383 13:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:14:52.383 13:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:14:52.383 13:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:14:52.383 13:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:14:52.383 13:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:14:52.383 13:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:14:52.383 13:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:14:52.383 13:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:14:52.383 13:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:14:52.383 13:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:14:52.383 13:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.k57rJDDxLr 00:14:52.383 13:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=693109 00:14:52.383 13:14:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@825 -- # waitforlisten 693109 /var/tmp/spdk-raid.sock 00:14:52.383 13:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:52.383 13:14:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 693109 ']' 00:14:52.383 13:14:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:52.383 13:14:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:52.383 13:14:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:52.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:52.383 13:14:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:52.383 13:14:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.383 [2024-07-26 13:14:32.881368] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:14:52.383 [2024-07-26 13:14:32.881425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid693109 ] 00:14:52.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.655 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:52.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.655 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:52.655 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:52.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:52.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:52.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:52.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:52.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:52.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:52.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:52.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:52.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3d:02.3 cannot be used 
00:14:52.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:52.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:52.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:52.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:52.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:52.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:52.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:52.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:52.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:52.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:52.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:52.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:52.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:52.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:52.656 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:52.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:52.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:52.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:52.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:52.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:52.656 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:52.656 [2024-07-26 13:14:33.013152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.656 [2024-07-26 13:14:33.099611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.656 [2024-07-26 13:14:33.154555] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:52.656 [2024-07-26 13:14:33.154581] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:53.608 13:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:53.608 13:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:14:53.608 13:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:53.608 13:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:53.608 BaseBdev1_malloc 00:14:53.608 13:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:53.867 true 00:14:53.867 13:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:54.126 [2024-07-26 13:14:34.407052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:54.126 [2024-07-26 13:14:34.407089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.126 [2024-07-26 13:14:34.407106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x986190 00:14:54.126 [2024-07-26 13:14:34.407118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.126 [2024-07-26 13:14:34.408651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.126 [2024-07-26 13:14:34.408677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:54.126 BaseBdev1 00:14:54.126 13:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:54.126 13:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:54.126 BaseBdev2_malloc 00:14:54.385 13:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:54.385 true 00:14:54.385 13:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:54.644 [2024-07-26 13:14:35.101321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:14:54.644 [2024-07-26 13:14:35.101363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.644 [2024-07-26 13:14:35.101381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x98ae20 00:14:54.644 [2024-07-26 13:14:35.101392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.644 [2024-07-26 13:14:35.102744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.644 [2024-07-26 13:14:35.102770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:54.644 BaseBdev2 00:14:54.644 13:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:54.644 13:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:54.903 BaseBdev3_malloc 00:14:54.903 13:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:55.161 true 00:14:55.161 13:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:55.420 [2024-07-26 13:14:35.803330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:55.420 [2024-07-26 13:14:35.803371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.420 [2024-07-26 13:14:35.803391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x98bd90 00:14:55.420 [2024-07-26 13:14:35.803403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.420 [2024-07-26 
13:14:35.804783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.420 [2024-07-26 13:14:35.804809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:55.420 BaseBdev3 00:14:55.420 13:14:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:14:55.678 [2024-07-26 13:14:36.027954] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.678 [2024-07-26 13:14:36.029133] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:55.678 [2024-07-26 13:14:36.029206] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:55.678 [2024-07-26 13:14:36.029379] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x98dba0 00:14:55.678 [2024-07-26 13:14:36.029389] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:55.678 [2024-07-26 13:14:36.029569] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x991ef0 00:14:55.678 [2024-07-26 13:14:36.029703] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x98dba0 00:14:55.678 [2024-07-26 13:14:36.029713] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x98dba0 00:14:55.678 [2024-07-26 13:14:36.029820] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.678 13:14:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:55.678 13:14:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:55.678 13:14:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:14:55.679 13:14:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:55.679 13:14:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:55.679 13:14:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:55.679 13:14:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:55.679 13:14:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:55.679 13:14:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:55.679 13:14:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:55.679 13:14:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.679 13:14:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.937 13:14:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:55.937 "name": "raid_bdev1", 00:14:55.937 "uuid": "e6f84d7f-c875-461b-b14d-de070f8e4e3d", 00:14:55.937 "strip_size_kb": 64, 00:14:55.937 "state": "online", 00:14:55.937 "raid_level": "raid0", 00:14:55.937 "superblock": true, 00:14:55.937 "num_base_bdevs": 3, 00:14:55.937 "num_base_bdevs_discovered": 3, 00:14:55.937 "num_base_bdevs_operational": 3, 00:14:55.937 "base_bdevs_list": [ 00:14:55.937 { 00:14:55.937 "name": "BaseBdev1", 00:14:55.938 "uuid": "5c84c8f4-8985-587f-a5ab-99ce014b473f", 00:14:55.938 "is_configured": true, 00:14:55.938 "data_offset": 2048, 00:14:55.938 "data_size": 63488 00:14:55.938 }, 00:14:55.938 { 00:14:55.938 "name": "BaseBdev2", 00:14:55.938 "uuid": "d4082bea-40ae-583c-8187-0b801c2839ac", 00:14:55.938 "is_configured": true, 00:14:55.938 "data_offset": 2048, 00:14:55.938 
"data_size": 63488 00:14:55.938 }, 00:14:55.938 { 00:14:55.938 "name": "BaseBdev3", 00:14:55.938 "uuid": "c3cfaa07-8557-5a41-852f-2b5e8d47ad19", 00:14:55.938 "is_configured": true, 00:14:55.938 "data_offset": 2048, 00:14:55.938 "data_size": 63488 00:14:55.938 } 00:14:55.938 ] 00:14:55.938 }' 00:14:55.938 13:14:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:55.938 13:14:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.505 13:14:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:14:56.506 13:14:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:56.506 [2024-07-26 13:14:36.970672] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x98ef70 00:14:57.442 13:14:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:57.701 13:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:14:57.701 13:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:57.701 13:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:14:57.701 13:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:57.701 13:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:57.701 13:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:57.701 13:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:57.701 13:14:38 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:57.701 13:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:57.701 13:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:57.701 13:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:57.701 13:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:57.701 13:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:57.701 13:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.701 13:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.961 13:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:57.961 "name": "raid_bdev1", 00:14:57.961 "uuid": "e6f84d7f-c875-461b-b14d-de070f8e4e3d", 00:14:57.961 "strip_size_kb": 64, 00:14:57.961 "state": "online", 00:14:57.961 "raid_level": "raid0", 00:14:57.961 "superblock": true, 00:14:57.961 "num_base_bdevs": 3, 00:14:57.961 "num_base_bdevs_discovered": 3, 00:14:57.961 "num_base_bdevs_operational": 3, 00:14:57.961 "base_bdevs_list": [ 00:14:57.961 { 00:14:57.961 "name": "BaseBdev1", 00:14:57.961 "uuid": "5c84c8f4-8985-587f-a5ab-99ce014b473f", 00:14:57.961 "is_configured": true, 00:14:57.961 "data_offset": 2048, 00:14:57.961 "data_size": 63488 00:14:57.961 }, 00:14:57.961 { 00:14:57.961 "name": "BaseBdev2", 00:14:57.961 "uuid": "d4082bea-40ae-583c-8187-0b801c2839ac", 00:14:57.961 "is_configured": true, 00:14:57.961 "data_offset": 2048, 00:14:57.961 "data_size": 63488 00:14:57.961 }, 00:14:57.961 { 00:14:57.961 "name": "BaseBdev3", 00:14:57.961 "uuid": "c3cfaa07-8557-5a41-852f-2b5e8d47ad19", 00:14:57.961 
"is_configured": true, 00:14:57.961 "data_offset": 2048, 00:14:57.961 "data_size": 63488 00:14:57.961 } 00:14:57.961 ] 00:14:57.961 }' 00:14:57.961 13:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:57.961 13:14:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.647 13:14:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:58.647 [2024-07-26 13:14:39.100937] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:58.647 [2024-07-26 13:14:39.100968] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:58.647 [2024-07-26 13:14:39.103889] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.647 [2024-07-26 13:14:39.103929] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.647 [2024-07-26 13:14:39.103958] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.647 [2024-07-26 13:14:39.103969] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x98dba0 name raid_bdev1, state offline 00:14:58.647 0 00:14:58.647 13:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 693109 00:14:58.647 13:14:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 693109 ']' 00:14:58.647 13:14:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 693109 00:14:58.647 13:14:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:14:58.647 13:14:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:58.647 13:14:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 693109 00:14:58.906 
13:14:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:58.907 13:14:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:58.907 13:14:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 693109' 00:14:58.907 killing process with pid 693109 00:14:58.907 13:14:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 693109 00:14:58.907 [2024-07-26 13:14:39.177946] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:58.907 13:14:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 693109 00:14:58.907 [2024-07-26 13:14:39.196444] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:58.907 13:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.k57rJDDxLr 00:14:58.907 13:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:14:58.907 13:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:14:58.907 13:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:14:58.907 13:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:14:58.907 13:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:58.907 13:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:58.907 13:14:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:14:58.907 00:14:58.907 real 0m6.596s 00:14:58.907 user 0m10.398s 00:14:58.907 sys 0m1.148s 00:14:58.907 13:14:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:58.907 13:14:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.907 ************************************ 00:14:58.907 
END TEST raid_write_error_test 00:14:58.907 ************************************ 00:14:59.166 13:14:39 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:14:59.166 13:14:39 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:14:59.166 13:14:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:59.166 13:14:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:59.166 13:14:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:59.166 ************************************ 00:14:59.166 START TEST raid_state_function_test 00:14:59.166 ************************************ 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ 
)) 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=694371 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 694371' 00:14:59.167 Process raid pid: 694371 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 694371 /var/tmp/spdk-raid.sock 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 694371 ']' 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:59.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:59.167 13:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.167 [2024-07-26 13:14:39.556656] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:14:59.167 [2024-07-26 13:14:39.556713] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:59.167 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:59.167 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:59.167 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:59.167 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:59.167 [2024-07-26 13:14:39.689448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.427 [2024-07-26 13:14:39.776823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.427 [2024-07-26 13:14:39.834759] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.427 [2024-07-26 13:14:39.834785] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.995 13:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:59.995 13:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:59.995 13:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:00.254 [2024-07-26 13:14:40.664558] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:00.254 [2024-07-26 13:14:40.664602] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't 
exist now 00:15:00.254 [2024-07-26 13:14:40.664615] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:00.254 [2024-07-26 13:14:40.664627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:00.254 [2024-07-26 13:14:40.664635] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:00.254 [2024-07-26 13:14:40.664645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:00.254 13:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:00.255 13:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:00.255 13:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:00.255 13:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:00.255 13:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:00.255 13:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:00.255 13:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:00.255 13:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:00.255 13:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:00.255 13:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:00.255 13:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.255 13:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:15:00.514 13:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:00.514 "name": "Existed_Raid", 00:15:00.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.514 "strip_size_kb": 64, 00:15:00.514 "state": "configuring", 00:15:00.514 "raid_level": "concat", 00:15:00.514 "superblock": false, 00:15:00.514 "num_base_bdevs": 3, 00:15:00.514 "num_base_bdevs_discovered": 0, 00:15:00.514 "num_base_bdevs_operational": 3, 00:15:00.514 "base_bdevs_list": [ 00:15:00.514 { 00:15:00.514 "name": "BaseBdev1", 00:15:00.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.514 "is_configured": false, 00:15:00.514 "data_offset": 0, 00:15:00.514 "data_size": 0 00:15:00.514 }, 00:15:00.514 { 00:15:00.514 "name": "BaseBdev2", 00:15:00.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.514 "is_configured": false, 00:15:00.514 "data_offset": 0, 00:15:00.514 "data_size": 0 00:15:00.514 }, 00:15:00.514 { 00:15:00.514 "name": "BaseBdev3", 00:15:00.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.514 "is_configured": false, 00:15:00.514 "data_offset": 0, 00:15:00.514 "data_size": 0 00:15:00.514 } 00:15:00.514 ] 00:15:00.514 }' 00:15:00.514 13:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:00.514 13:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.081 13:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:01.340 [2024-07-26 13:14:41.651005] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:01.340 [2024-07-26 13:14:41.651039] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2326f40 name Existed_Raid, state configuring 00:15:01.340 13:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:01.599 [2024-07-26 13:14:41.879628] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:01.599 [2024-07-26 13:14:41.879657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:01.599 [2024-07-26 13:14:41.879666] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:01.599 [2024-07-26 13:14:41.879676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:01.599 [2024-07-26 13:14:41.879688] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:01.599 [2024-07-26 13:14:41.879698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:01.599 13:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:01.599 [2024-07-26 13:14:42.109738] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:01.599 BaseBdev1 00:15:01.858 13:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:01.858 13:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:01.858 13:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:01.858 13:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:01.858 13:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:01.858 13:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:15:01.858 13:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:01.858 13:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:02.117 [ 00:15:02.117 { 00:15:02.117 "name": "BaseBdev1", 00:15:02.117 "aliases": [ 00:15:02.117 "d91a38f3-090d-42f8-8668-69618fa9afa2" 00:15:02.117 ], 00:15:02.117 "product_name": "Malloc disk", 00:15:02.117 "block_size": 512, 00:15:02.117 "num_blocks": 65536, 00:15:02.117 "uuid": "d91a38f3-090d-42f8-8668-69618fa9afa2", 00:15:02.117 "assigned_rate_limits": { 00:15:02.117 "rw_ios_per_sec": 0, 00:15:02.117 "rw_mbytes_per_sec": 0, 00:15:02.117 "r_mbytes_per_sec": 0, 00:15:02.117 "w_mbytes_per_sec": 0 00:15:02.117 }, 00:15:02.117 "claimed": true, 00:15:02.117 "claim_type": "exclusive_write", 00:15:02.117 "zoned": false, 00:15:02.117 "supported_io_types": { 00:15:02.117 "read": true, 00:15:02.117 "write": true, 00:15:02.117 "unmap": true, 00:15:02.117 "flush": true, 00:15:02.117 "reset": true, 00:15:02.117 "nvme_admin": false, 00:15:02.117 "nvme_io": false, 00:15:02.117 "nvme_io_md": false, 00:15:02.117 "write_zeroes": true, 00:15:02.117 "zcopy": true, 00:15:02.117 "get_zone_info": false, 00:15:02.117 "zone_management": false, 00:15:02.117 "zone_append": false, 00:15:02.117 "compare": false, 00:15:02.117 "compare_and_write": false, 00:15:02.117 "abort": true, 00:15:02.117 "seek_hole": false, 00:15:02.117 "seek_data": false, 00:15:02.117 "copy": true, 00:15:02.117 "nvme_iov_md": false 00:15:02.117 }, 00:15:02.117 "memory_domains": [ 00:15:02.117 { 00:15:02.117 "dma_device_id": "system", 00:15:02.117 "dma_device_type": 1 00:15:02.117 }, 00:15:02.117 { 00:15:02.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.117 "dma_device_type": 2 
00:15:02.117 } 00:15:02.117 ], 00:15:02.117 "driver_specific": {} 00:15:02.117 } 00:15:02.117 ] 00:15:02.117 13:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:02.117 13:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:02.117 13:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:02.117 13:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:02.117 13:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:02.117 13:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:02.117 13:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:02.117 13:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:02.117 13:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:02.117 13:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:02.117 13:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:02.117 13:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.117 13:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.376 13:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:02.376 "name": "Existed_Raid", 00:15:02.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.376 "strip_size_kb": 64, 00:15:02.376 "state": "configuring", 00:15:02.376 "raid_level": 
"concat", 00:15:02.376 "superblock": false, 00:15:02.376 "num_base_bdevs": 3, 00:15:02.376 "num_base_bdevs_discovered": 1, 00:15:02.376 "num_base_bdevs_operational": 3, 00:15:02.376 "base_bdevs_list": [ 00:15:02.376 { 00:15:02.376 "name": "BaseBdev1", 00:15:02.376 "uuid": "d91a38f3-090d-42f8-8668-69618fa9afa2", 00:15:02.376 "is_configured": true, 00:15:02.376 "data_offset": 0, 00:15:02.376 "data_size": 65536 00:15:02.376 }, 00:15:02.376 { 00:15:02.376 "name": "BaseBdev2", 00:15:02.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.376 "is_configured": false, 00:15:02.376 "data_offset": 0, 00:15:02.376 "data_size": 0 00:15:02.376 }, 00:15:02.376 { 00:15:02.376 "name": "BaseBdev3", 00:15:02.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.376 "is_configured": false, 00:15:02.376 "data_offset": 0, 00:15:02.376 "data_size": 0 00:15:02.376 } 00:15:02.376 ] 00:15:02.376 }' 00:15:02.376 13:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:02.376 13:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.950 13:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:03.209 [2024-07-26 13:14:43.561571] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:03.209 [2024-07-26 13:14:43.561609] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2326810 name Existed_Raid, state configuring 00:15:03.209 13:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:03.209 [2024-07-26 13:14:43.726041] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:03.209 [2024-07-26 
13:14:43.727455] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:03.209 [2024-07-26 13:14:43.727490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:03.209 [2024-07-26 13:14:43.727499] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:03.209 [2024-07-26 13:14:43.727510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:03.467 13:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:03.467 13:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:03.467 13:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:03.467 13:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:03.467 13:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:03.467 13:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:03.467 13:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:03.467 13:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:03.467 13:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:03.467 13:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:03.467 13:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:03.467 13:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:03.467 13:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.467 13:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.467 13:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:03.467 "name": "Existed_Raid", 00:15:03.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.467 "strip_size_kb": 64, 00:15:03.467 "state": "configuring", 00:15:03.467 "raid_level": "concat", 00:15:03.467 "superblock": false, 00:15:03.467 "num_base_bdevs": 3, 00:15:03.467 "num_base_bdevs_discovered": 1, 00:15:03.467 "num_base_bdevs_operational": 3, 00:15:03.467 "base_bdevs_list": [ 00:15:03.467 { 00:15:03.467 "name": "BaseBdev1", 00:15:03.467 "uuid": "d91a38f3-090d-42f8-8668-69618fa9afa2", 00:15:03.467 "is_configured": true, 00:15:03.467 "data_offset": 0, 00:15:03.467 "data_size": 65536 00:15:03.467 }, 00:15:03.467 { 00:15:03.467 "name": "BaseBdev2", 00:15:03.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.467 "is_configured": false, 00:15:03.467 "data_offset": 0, 00:15:03.467 "data_size": 0 00:15:03.467 }, 00:15:03.467 { 00:15:03.467 "name": "BaseBdev3", 00:15:03.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.467 "is_configured": false, 00:15:03.467 "data_offset": 0, 00:15:03.467 "data_size": 0 00:15:03.467 } 00:15:03.467 ] 00:15:03.467 }' 00:15:03.467 13:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:03.467 13:14:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.034 13:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:04.294 [2024-07-26 13:14:44.764199] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:15:04.294 BaseBdev2 00:15:04.294 13:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:04.294 13:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:04.294 13:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:04.294 13:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:04.294 13:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:04.294 13:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:04.294 13:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:04.553 13:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:04.811 [ 00:15:04.811 { 00:15:04.811 "name": "BaseBdev2", 00:15:04.811 "aliases": [ 00:15:04.811 "3e0a2bb8-4f86-471a-a0c6-3879948cb579" 00:15:04.811 ], 00:15:04.811 "product_name": "Malloc disk", 00:15:04.811 "block_size": 512, 00:15:04.811 "num_blocks": 65536, 00:15:04.811 "uuid": "3e0a2bb8-4f86-471a-a0c6-3879948cb579", 00:15:04.811 "assigned_rate_limits": { 00:15:04.811 "rw_ios_per_sec": 0, 00:15:04.811 "rw_mbytes_per_sec": 0, 00:15:04.811 "r_mbytes_per_sec": 0, 00:15:04.811 "w_mbytes_per_sec": 0 00:15:04.811 }, 00:15:04.811 "claimed": true, 00:15:04.811 "claim_type": "exclusive_write", 00:15:04.811 "zoned": false, 00:15:04.811 "supported_io_types": { 00:15:04.811 "read": true, 00:15:04.811 "write": true, 00:15:04.811 "unmap": true, 00:15:04.811 "flush": true, 00:15:04.811 "reset": true, 00:15:04.811 "nvme_admin": false, 00:15:04.811 "nvme_io": false, 
00:15:04.811 "nvme_io_md": false, 00:15:04.811 "write_zeroes": true, 00:15:04.811 "zcopy": true, 00:15:04.811 "get_zone_info": false, 00:15:04.811 "zone_management": false, 00:15:04.811 "zone_append": false, 00:15:04.811 "compare": false, 00:15:04.811 "compare_and_write": false, 00:15:04.811 "abort": true, 00:15:04.811 "seek_hole": false, 00:15:04.811 "seek_data": false, 00:15:04.811 "copy": true, 00:15:04.811 "nvme_iov_md": false 00:15:04.811 }, 00:15:04.811 "memory_domains": [ 00:15:04.811 { 00:15:04.811 "dma_device_id": "system", 00:15:04.811 "dma_device_type": 1 00:15:04.811 }, 00:15:04.811 { 00:15:04.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.811 "dma_device_type": 2 00:15:04.811 } 00:15:04.811 ], 00:15:04.811 "driver_specific": {} 00:15:04.811 } 00:15:04.811 ] 00:15:04.811 13:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:04.811 13:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:04.811 13:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:04.811 13:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:04.811 13:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:04.811 13:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:04.811 13:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:04.811 13:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:04.811 13:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:04.811 13:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:04.811 13:14:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:04.811 13:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:04.811 13:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:04.811 13:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.811 13:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.070 13:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:05.070 "name": "Existed_Raid", 00:15:05.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.070 "strip_size_kb": 64, 00:15:05.070 "state": "configuring", 00:15:05.070 "raid_level": "concat", 00:15:05.070 "superblock": false, 00:15:05.070 "num_base_bdevs": 3, 00:15:05.070 "num_base_bdevs_discovered": 2, 00:15:05.070 "num_base_bdevs_operational": 3, 00:15:05.070 "base_bdevs_list": [ 00:15:05.070 { 00:15:05.070 "name": "BaseBdev1", 00:15:05.070 "uuid": "d91a38f3-090d-42f8-8668-69618fa9afa2", 00:15:05.070 "is_configured": true, 00:15:05.070 "data_offset": 0, 00:15:05.070 "data_size": 65536 00:15:05.070 }, 00:15:05.070 { 00:15:05.070 "name": "BaseBdev2", 00:15:05.070 "uuid": "3e0a2bb8-4f86-471a-a0c6-3879948cb579", 00:15:05.070 "is_configured": true, 00:15:05.070 "data_offset": 0, 00:15:05.070 "data_size": 65536 00:15:05.070 }, 00:15:05.070 { 00:15:05.070 "name": "BaseBdev3", 00:15:05.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.070 "is_configured": false, 00:15:05.070 "data_offset": 0, 00:15:05.070 "data_size": 0 00:15:05.070 } 00:15:05.070 ] 00:15:05.070 }' 00:15:05.070 13:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:05.070 13:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:05.636 13:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:05.896 [2024-07-26 13:14:46.203260] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:05.896 [2024-07-26 13:14:46.203302] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2327710 00:15:05.896 [2024-07-26 13:14:46.203310] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:05.896 [2024-07-26 13:14:46.203493] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23273e0 00:15:05.896 [2024-07-26 13:14:46.203606] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2327710 00:15:05.896 [2024-07-26 13:14:46.203616] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2327710 00:15:05.896 [2024-07-26 13:14:46.203764] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.896 BaseBdev3 00:15:05.896 13:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:05.896 13:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:05.896 13:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:05.896 13:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:05.896 13:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:05.896 13:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:05.896 13:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:15:06.170 13:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:06.170 [ 00:15:06.170 { 00:15:06.170 "name": "BaseBdev3", 00:15:06.170 "aliases": [ 00:15:06.170 "66f2ea49-158a-4538-aa60-4b44d94402ad" 00:15:06.170 ], 00:15:06.170 "product_name": "Malloc disk", 00:15:06.170 "block_size": 512, 00:15:06.170 "num_blocks": 65536, 00:15:06.170 "uuid": "66f2ea49-158a-4538-aa60-4b44d94402ad", 00:15:06.170 "assigned_rate_limits": { 00:15:06.171 "rw_ios_per_sec": 0, 00:15:06.171 "rw_mbytes_per_sec": 0, 00:15:06.171 "r_mbytes_per_sec": 0, 00:15:06.171 "w_mbytes_per_sec": 0 00:15:06.171 }, 00:15:06.171 "claimed": true, 00:15:06.171 "claim_type": "exclusive_write", 00:15:06.171 "zoned": false, 00:15:06.171 "supported_io_types": { 00:15:06.171 "read": true, 00:15:06.171 "write": true, 00:15:06.171 "unmap": true, 00:15:06.171 "flush": true, 00:15:06.171 "reset": true, 00:15:06.171 "nvme_admin": false, 00:15:06.171 "nvme_io": false, 00:15:06.171 "nvme_io_md": false, 00:15:06.171 "write_zeroes": true, 00:15:06.171 "zcopy": true, 00:15:06.171 "get_zone_info": false, 00:15:06.171 "zone_management": false, 00:15:06.171 "zone_append": false, 00:15:06.171 "compare": false, 00:15:06.171 "compare_and_write": false, 00:15:06.171 "abort": true, 00:15:06.171 "seek_hole": false, 00:15:06.171 "seek_data": false, 00:15:06.171 "copy": true, 00:15:06.171 "nvme_iov_md": false 00:15:06.171 }, 00:15:06.171 "memory_domains": [ 00:15:06.171 { 00:15:06.171 "dma_device_id": "system", 00:15:06.171 "dma_device_type": 1 00:15:06.171 }, 00:15:06.171 { 00:15:06.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.171 "dma_device_type": 2 00:15:06.171 } 00:15:06.171 ], 00:15:06.171 "driver_specific": {} 00:15:06.171 } 00:15:06.171 ] 00:15:06.171 13:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # 
return 0 00:15:06.171 13:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:06.171 13:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:06.171 13:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:06.171 13:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:06.171 13:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:06.171 13:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:06.171 13:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:06.171 13:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:06.171 13:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:06.171 13:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:06.171 13:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:06.171 13:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:06.171 13:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.171 13:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.444 13:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:06.444 "name": "Existed_Raid", 00:15:06.444 "uuid": "3a7d1747-76fa-4622-a421-492dbc1e88be", 00:15:06.444 "strip_size_kb": 64, 00:15:06.444 "state": "online", 00:15:06.444 "raid_level": 
"concat", 00:15:06.444 "superblock": false, 00:15:06.444 "num_base_bdevs": 3, 00:15:06.444 "num_base_bdevs_discovered": 3, 00:15:06.444 "num_base_bdevs_operational": 3, 00:15:06.444 "base_bdevs_list": [ 00:15:06.444 { 00:15:06.444 "name": "BaseBdev1", 00:15:06.444 "uuid": "d91a38f3-090d-42f8-8668-69618fa9afa2", 00:15:06.444 "is_configured": true, 00:15:06.444 "data_offset": 0, 00:15:06.444 "data_size": 65536 00:15:06.444 }, 00:15:06.444 { 00:15:06.444 "name": "BaseBdev2", 00:15:06.444 "uuid": "3e0a2bb8-4f86-471a-a0c6-3879948cb579", 00:15:06.444 "is_configured": true, 00:15:06.444 "data_offset": 0, 00:15:06.444 "data_size": 65536 00:15:06.444 }, 00:15:06.444 { 00:15:06.444 "name": "BaseBdev3", 00:15:06.444 "uuid": "66f2ea49-158a-4538-aa60-4b44d94402ad", 00:15:06.444 "is_configured": true, 00:15:06.444 "data_offset": 0, 00:15:06.444 "data_size": 65536 00:15:06.444 } 00:15:06.444 ] 00:15:06.444 }' 00:15:06.444 13:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:06.444 13:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.012 13:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:07.012 13:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:07.012 13:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:07.012 13:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:07.012 13:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:07.012 13:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:07.012 13:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 
00:15:07.012 13:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:07.271 [2024-07-26 13:14:47.627302] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:07.271 13:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:07.271 "name": "Existed_Raid", 00:15:07.271 "aliases": [ 00:15:07.271 "3a7d1747-76fa-4622-a421-492dbc1e88be" 00:15:07.271 ], 00:15:07.271 "product_name": "Raid Volume", 00:15:07.271 "block_size": 512, 00:15:07.271 "num_blocks": 196608, 00:15:07.271 "uuid": "3a7d1747-76fa-4622-a421-492dbc1e88be", 00:15:07.271 "assigned_rate_limits": { 00:15:07.271 "rw_ios_per_sec": 0, 00:15:07.271 "rw_mbytes_per_sec": 0, 00:15:07.271 "r_mbytes_per_sec": 0, 00:15:07.271 "w_mbytes_per_sec": 0 00:15:07.271 }, 00:15:07.271 "claimed": false, 00:15:07.271 "zoned": false, 00:15:07.271 "supported_io_types": { 00:15:07.271 "read": true, 00:15:07.271 "write": true, 00:15:07.271 "unmap": true, 00:15:07.271 "flush": true, 00:15:07.271 "reset": true, 00:15:07.271 "nvme_admin": false, 00:15:07.271 "nvme_io": false, 00:15:07.271 "nvme_io_md": false, 00:15:07.271 "write_zeroes": true, 00:15:07.271 "zcopy": false, 00:15:07.271 "get_zone_info": false, 00:15:07.271 "zone_management": false, 00:15:07.271 "zone_append": false, 00:15:07.271 "compare": false, 00:15:07.271 "compare_and_write": false, 00:15:07.271 "abort": false, 00:15:07.271 "seek_hole": false, 00:15:07.271 "seek_data": false, 00:15:07.271 "copy": false, 00:15:07.271 "nvme_iov_md": false 00:15:07.271 }, 00:15:07.271 "memory_domains": [ 00:15:07.271 { 00:15:07.271 "dma_device_id": "system", 00:15:07.271 "dma_device_type": 1 00:15:07.271 }, 00:15:07.271 { 00:15:07.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.271 "dma_device_type": 2 00:15:07.271 }, 00:15:07.271 { 00:15:07.271 "dma_device_id": "system", 00:15:07.271 "dma_device_type": 1 00:15:07.271 }, 00:15:07.271 { 00:15:07.271 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:07.271 "dma_device_type": 2 00:15:07.271 }, 00:15:07.271 { 00:15:07.271 "dma_device_id": "system", 00:15:07.271 "dma_device_type": 1 00:15:07.271 }, 00:15:07.271 { 00:15:07.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.271 "dma_device_type": 2 00:15:07.271 } 00:15:07.271 ], 00:15:07.271 "driver_specific": { 00:15:07.271 "raid": { 00:15:07.271 "uuid": "3a7d1747-76fa-4622-a421-492dbc1e88be", 00:15:07.271 "strip_size_kb": 64, 00:15:07.271 "state": "online", 00:15:07.271 "raid_level": "concat", 00:15:07.271 "superblock": false, 00:15:07.271 "num_base_bdevs": 3, 00:15:07.271 "num_base_bdevs_discovered": 3, 00:15:07.271 "num_base_bdevs_operational": 3, 00:15:07.271 "base_bdevs_list": [ 00:15:07.271 { 00:15:07.271 "name": "BaseBdev1", 00:15:07.271 "uuid": "d91a38f3-090d-42f8-8668-69618fa9afa2", 00:15:07.271 "is_configured": true, 00:15:07.271 "data_offset": 0, 00:15:07.271 "data_size": 65536 00:15:07.271 }, 00:15:07.271 { 00:15:07.271 "name": "BaseBdev2", 00:15:07.271 "uuid": "3e0a2bb8-4f86-471a-a0c6-3879948cb579", 00:15:07.271 "is_configured": true, 00:15:07.271 "data_offset": 0, 00:15:07.271 "data_size": 65536 00:15:07.271 }, 00:15:07.271 { 00:15:07.271 "name": "BaseBdev3", 00:15:07.271 "uuid": "66f2ea49-158a-4538-aa60-4b44d94402ad", 00:15:07.271 "is_configured": true, 00:15:07.271 "data_offset": 0, 00:15:07.271 "data_size": 65536 00:15:07.271 } 00:15:07.271 ] 00:15:07.271 } 00:15:07.271 } 00:15:07.271 }' 00:15:07.271 13:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:07.271 13:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:07.271 BaseBdev2 00:15:07.271 BaseBdev3' 00:15:07.271 13:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:07.271 13:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:07.271 13:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:07.530 13:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:07.530 "name": "BaseBdev1", 00:15:07.530 "aliases": [ 00:15:07.530 "d91a38f3-090d-42f8-8668-69618fa9afa2" 00:15:07.530 ], 00:15:07.530 "product_name": "Malloc disk", 00:15:07.530 "block_size": 512, 00:15:07.530 "num_blocks": 65536, 00:15:07.530 "uuid": "d91a38f3-090d-42f8-8668-69618fa9afa2", 00:15:07.530 "assigned_rate_limits": { 00:15:07.530 "rw_ios_per_sec": 0, 00:15:07.530 "rw_mbytes_per_sec": 0, 00:15:07.530 "r_mbytes_per_sec": 0, 00:15:07.530 "w_mbytes_per_sec": 0 00:15:07.530 }, 00:15:07.530 "claimed": true, 00:15:07.530 "claim_type": "exclusive_write", 00:15:07.530 "zoned": false, 00:15:07.530 "supported_io_types": { 00:15:07.530 "read": true, 00:15:07.530 "write": true, 00:15:07.530 "unmap": true, 00:15:07.530 "flush": true, 00:15:07.530 "reset": true, 00:15:07.530 "nvme_admin": false, 00:15:07.530 "nvme_io": false, 00:15:07.530 "nvme_io_md": false, 00:15:07.530 "write_zeroes": true, 00:15:07.530 "zcopy": true, 00:15:07.530 "get_zone_info": false, 00:15:07.530 "zone_management": false, 00:15:07.530 "zone_append": false, 00:15:07.530 "compare": false, 00:15:07.530 "compare_and_write": false, 00:15:07.530 "abort": true, 00:15:07.530 "seek_hole": false, 00:15:07.530 "seek_data": false, 00:15:07.530 "copy": true, 00:15:07.530 "nvme_iov_md": false 00:15:07.530 }, 00:15:07.530 "memory_domains": [ 00:15:07.530 { 00:15:07.530 "dma_device_id": "system", 00:15:07.530 "dma_device_type": 1 00:15:07.530 }, 00:15:07.530 { 00:15:07.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.530 "dma_device_type": 2 00:15:07.530 } 00:15:07.530 ], 00:15:07.530 "driver_specific": {} 00:15:07.530 }' 00:15:07.530 13:14:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:07.530 13:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:07.530 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:07.530 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:07.789 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:07.789 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:07.789 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:07.789 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:07.789 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:07.789 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:07.789 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:07.789 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:07.789 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:07.789 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:07.789 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:08.048 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:08.048 "name": "BaseBdev2", 00:15:08.048 "aliases": [ 00:15:08.048 "3e0a2bb8-4f86-471a-a0c6-3879948cb579" 00:15:08.048 ], 00:15:08.048 "product_name": "Malloc disk", 00:15:08.048 "block_size": 512, 00:15:08.048 "num_blocks": 65536, 00:15:08.048 "uuid": "3e0a2bb8-4f86-471a-a0c6-3879948cb579", 
00:15:08.048 "assigned_rate_limits": { 00:15:08.048 "rw_ios_per_sec": 0, 00:15:08.048 "rw_mbytes_per_sec": 0, 00:15:08.048 "r_mbytes_per_sec": 0, 00:15:08.048 "w_mbytes_per_sec": 0 00:15:08.048 }, 00:15:08.048 "claimed": true, 00:15:08.048 "claim_type": "exclusive_write", 00:15:08.048 "zoned": false, 00:15:08.048 "supported_io_types": { 00:15:08.048 "read": true, 00:15:08.048 "write": true, 00:15:08.048 "unmap": true, 00:15:08.048 "flush": true, 00:15:08.048 "reset": true, 00:15:08.048 "nvme_admin": false, 00:15:08.048 "nvme_io": false, 00:15:08.048 "nvme_io_md": false, 00:15:08.048 "write_zeroes": true, 00:15:08.048 "zcopy": true, 00:15:08.048 "get_zone_info": false, 00:15:08.048 "zone_management": false, 00:15:08.048 "zone_append": false, 00:15:08.048 "compare": false, 00:15:08.048 "compare_and_write": false, 00:15:08.048 "abort": true, 00:15:08.048 "seek_hole": false, 00:15:08.048 "seek_data": false, 00:15:08.048 "copy": true, 00:15:08.048 "nvme_iov_md": false 00:15:08.048 }, 00:15:08.048 "memory_domains": [ 00:15:08.048 { 00:15:08.048 "dma_device_id": "system", 00:15:08.048 "dma_device_type": 1 00:15:08.048 }, 00:15:08.048 { 00:15:08.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.048 "dma_device_type": 2 00:15:08.048 } 00:15:08.048 ], 00:15:08.048 "driver_specific": {} 00:15:08.048 }' 00:15:08.048 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:08.048 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:08.048 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:08.048 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:08.048 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:08.307 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:08.307 13:14:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:08.307 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:08.307 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:08.307 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:08.307 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:08.307 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:08.307 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:08.307 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:08.307 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:08.566 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:08.566 "name": "BaseBdev3", 00:15:08.566 "aliases": [ 00:15:08.566 "66f2ea49-158a-4538-aa60-4b44d94402ad" 00:15:08.566 ], 00:15:08.566 "product_name": "Malloc disk", 00:15:08.566 "block_size": 512, 00:15:08.566 "num_blocks": 65536, 00:15:08.566 "uuid": "66f2ea49-158a-4538-aa60-4b44d94402ad", 00:15:08.566 "assigned_rate_limits": { 00:15:08.566 "rw_ios_per_sec": 0, 00:15:08.566 "rw_mbytes_per_sec": 0, 00:15:08.566 "r_mbytes_per_sec": 0, 00:15:08.566 "w_mbytes_per_sec": 0 00:15:08.566 }, 00:15:08.566 "claimed": true, 00:15:08.566 "claim_type": "exclusive_write", 00:15:08.566 "zoned": false, 00:15:08.566 "supported_io_types": { 00:15:08.566 "read": true, 00:15:08.566 "write": true, 00:15:08.566 "unmap": true, 00:15:08.566 "flush": true, 00:15:08.566 "reset": true, 00:15:08.566 "nvme_admin": false, 00:15:08.566 "nvme_io": false, 00:15:08.566 "nvme_io_md": false, 00:15:08.566 "write_zeroes": true, 
00:15:08.566 "zcopy": true, 00:15:08.566 "get_zone_info": false, 00:15:08.566 "zone_management": false, 00:15:08.566 "zone_append": false, 00:15:08.566 "compare": false, 00:15:08.566 "compare_and_write": false, 00:15:08.566 "abort": true, 00:15:08.566 "seek_hole": false, 00:15:08.566 "seek_data": false, 00:15:08.566 "copy": true, 00:15:08.566 "nvme_iov_md": false 00:15:08.566 }, 00:15:08.566 "memory_domains": [ 00:15:08.566 { 00:15:08.566 "dma_device_id": "system", 00:15:08.566 "dma_device_type": 1 00:15:08.566 }, 00:15:08.566 { 00:15:08.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.566 "dma_device_type": 2 00:15:08.566 } 00:15:08.566 ], 00:15:08.566 "driver_specific": {} 00:15:08.566 }' 00:15:08.566 13:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:08.566 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:08.566 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:08.566 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:08.826 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:08.826 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:08.826 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:08.826 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:08.826 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:08.826 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:08.826 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:08.826 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:08.826 13:14:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:09.085 [2024-07-26 13:14:49.524063] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:09.085 [2024-07-26 13:14:49.524087] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:09.085 [2024-07-26 13:14:49.524127] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:09.085 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:09.085 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:15:09.085 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:09.085 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:09.085 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:09.085 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:15:09.085 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:09.085 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:09.085 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:09.085 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:09.085 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:09.085 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:09.085 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:09.085 13:14:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:09.085 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:09.085 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.085 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.345 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:09.345 "name": "Existed_Raid", 00:15:09.345 "uuid": "3a7d1747-76fa-4622-a421-492dbc1e88be", 00:15:09.345 "strip_size_kb": 64, 00:15:09.345 "state": "offline", 00:15:09.345 "raid_level": "concat", 00:15:09.345 "superblock": false, 00:15:09.345 "num_base_bdevs": 3, 00:15:09.345 "num_base_bdevs_discovered": 2, 00:15:09.345 "num_base_bdevs_operational": 2, 00:15:09.345 "base_bdevs_list": [ 00:15:09.345 { 00:15:09.345 "name": null, 00:15:09.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.345 "is_configured": false, 00:15:09.345 "data_offset": 0, 00:15:09.345 "data_size": 65536 00:15:09.345 }, 00:15:09.345 { 00:15:09.345 "name": "BaseBdev2", 00:15:09.345 "uuid": "3e0a2bb8-4f86-471a-a0c6-3879948cb579", 00:15:09.345 "is_configured": true, 00:15:09.345 "data_offset": 0, 00:15:09.345 "data_size": 65536 00:15:09.345 }, 00:15:09.345 { 00:15:09.345 "name": "BaseBdev3", 00:15:09.345 "uuid": "66f2ea49-158a-4538-aa60-4b44d94402ad", 00:15:09.345 "is_configured": true, 00:15:09.345 "data_offset": 0, 00:15:09.345 "data_size": 65536 00:15:09.345 } 00:15:09.345 ] 00:15:09.345 }' 00:15:09.345 13:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:09.345 13:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.913 13:14:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:09.913 13:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:09.913 13:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.913 13:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:10.172 13:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:10.172 13:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:10.172 13:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:10.431 [2024-07-26 13:14:50.772496] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:10.431 13:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:10.431 13:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:10.431 13:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.431 13:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:10.690 13:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:10.690 13:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:10.690 13:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:10.949 [2024-07-26 
13:14:51.239786] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:10.949 [2024-07-26 13:14:51.239829] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2327710 name Existed_Raid, state offline 00:15:10.949 13:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:10.949 13:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:10.949 13:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.949 13:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:11.209 13:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:11.209 13:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:11.209 13:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:15:11.209 13:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:11.209 13:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:11.209 13:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:11.209 BaseBdev2 00:15:11.209 13:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:11.209 13:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:11.209 13:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:11.209 13:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:11.209 13:14:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:11.209 13:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:11.209 13:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:11.468 13:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:11.728 [ 00:15:11.728 { 00:15:11.728 "name": "BaseBdev2", 00:15:11.728 "aliases": [ 00:15:11.728 "1e997f0d-9eec-4abb-a331-0b3858d0e811" 00:15:11.728 ], 00:15:11.728 "product_name": "Malloc disk", 00:15:11.728 "block_size": 512, 00:15:11.728 "num_blocks": 65536, 00:15:11.728 "uuid": "1e997f0d-9eec-4abb-a331-0b3858d0e811", 00:15:11.728 "assigned_rate_limits": { 00:15:11.728 "rw_ios_per_sec": 0, 00:15:11.728 "rw_mbytes_per_sec": 0, 00:15:11.728 "r_mbytes_per_sec": 0, 00:15:11.728 "w_mbytes_per_sec": 0 00:15:11.728 }, 00:15:11.728 "claimed": false, 00:15:11.728 "zoned": false, 00:15:11.728 "supported_io_types": { 00:15:11.728 "read": true, 00:15:11.728 "write": true, 00:15:11.728 "unmap": true, 00:15:11.728 "flush": true, 00:15:11.728 "reset": true, 00:15:11.728 "nvme_admin": false, 00:15:11.728 "nvme_io": false, 00:15:11.728 "nvme_io_md": false, 00:15:11.728 "write_zeroes": true, 00:15:11.728 "zcopy": true, 00:15:11.728 "get_zone_info": false, 00:15:11.728 "zone_management": false, 00:15:11.728 "zone_append": false, 00:15:11.728 "compare": false, 00:15:11.728 "compare_and_write": false, 00:15:11.728 "abort": true, 00:15:11.728 "seek_hole": false, 00:15:11.728 "seek_data": false, 00:15:11.728 "copy": true, 00:15:11.728 "nvme_iov_md": false 00:15:11.728 }, 00:15:11.728 "memory_domains": [ 00:15:11.728 { 00:15:11.728 "dma_device_id": "system", 
00:15:11.728 "dma_device_type": 1 00:15:11.728 }, 00:15:11.728 { 00:15:11.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.728 "dma_device_type": 2 00:15:11.728 } 00:15:11.728 ], 00:15:11.728 "driver_specific": {} 00:15:11.728 } 00:15:11.728 ] 00:15:11.728 13:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:11.728 13:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:11.728 13:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:11.728 13:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:11.987 BaseBdev3 00:15:11.987 13:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:11.987 13:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:11.987 13:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:11.987 13:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:11.987 13:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:11.987 13:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:11.987 13:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:12.247 13:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:12.506 [ 00:15:12.506 { 00:15:12.506 "name": "BaseBdev3", 00:15:12.506 "aliases": [ 
00:15:12.506 "d71dddb0-afcc-4fa0-9853-d712a9d1af83" 00:15:12.506 ], 00:15:12.506 "product_name": "Malloc disk", 00:15:12.506 "block_size": 512, 00:15:12.506 "num_blocks": 65536, 00:15:12.506 "uuid": "d71dddb0-afcc-4fa0-9853-d712a9d1af83", 00:15:12.506 "assigned_rate_limits": { 00:15:12.506 "rw_ios_per_sec": 0, 00:15:12.506 "rw_mbytes_per_sec": 0, 00:15:12.506 "r_mbytes_per_sec": 0, 00:15:12.506 "w_mbytes_per_sec": 0 00:15:12.506 }, 00:15:12.506 "claimed": false, 00:15:12.506 "zoned": false, 00:15:12.506 "supported_io_types": { 00:15:12.506 "read": true, 00:15:12.506 "write": true, 00:15:12.506 "unmap": true, 00:15:12.506 "flush": true, 00:15:12.506 "reset": true, 00:15:12.506 "nvme_admin": false, 00:15:12.506 "nvme_io": false, 00:15:12.506 "nvme_io_md": false, 00:15:12.506 "write_zeroes": true, 00:15:12.506 "zcopy": true, 00:15:12.506 "get_zone_info": false, 00:15:12.506 "zone_management": false, 00:15:12.506 "zone_append": false, 00:15:12.506 "compare": false, 00:15:12.506 "compare_and_write": false, 00:15:12.506 "abort": true, 00:15:12.506 "seek_hole": false, 00:15:12.506 "seek_data": false, 00:15:12.506 "copy": true, 00:15:12.506 "nvme_iov_md": false 00:15:12.506 }, 00:15:12.506 "memory_domains": [ 00:15:12.506 { 00:15:12.506 "dma_device_id": "system", 00:15:12.506 "dma_device_type": 1 00:15:12.506 }, 00:15:12.506 { 00:15:12.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.506 "dma_device_type": 2 00:15:12.506 } 00:15:12.506 ], 00:15:12.506 "driver_specific": {} 00:15:12.507 } 00:15:12.507 ] 00:15:12.507 13:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:12.507 13:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:12.507 13:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:12.507 13:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:12.766 [2024-07-26 13:14:53.107330] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:12.766 [2024-07-26 13:14:53.107369] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:12.766 [2024-07-26 13:14:53.107388] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:12.766 [2024-07-26 13:14:53.108638] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:12.766 13:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:12.766 13:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:12.766 13:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:12.766 13:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:12.766 13:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:12.766 13:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:12.766 13:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:12.766 13:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:12.766 13:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:12.766 13:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:12.766 13:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.766 
13:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.025 13:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:13.025 "name": "Existed_Raid", 00:15:13.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.025 "strip_size_kb": 64, 00:15:13.025 "state": "configuring", 00:15:13.025 "raid_level": "concat", 00:15:13.025 "superblock": false, 00:15:13.025 "num_base_bdevs": 3, 00:15:13.025 "num_base_bdevs_discovered": 2, 00:15:13.025 "num_base_bdevs_operational": 3, 00:15:13.025 "base_bdevs_list": [ 00:15:13.025 { 00:15:13.025 "name": "BaseBdev1", 00:15:13.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.025 "is_configured": false, 00:15:13.025 "data_offset": 0, 00:15:13.025 "data_size": 0 00:15:13.025 }, 00:15:13.025 { 00:15:13.025 "name": "BaseBdev2", 00:15:13.025 "uuid": "1e997f0d-9eec-4abb-a331-0b3858d0e811", 00:15:13.025 "is_configured": true, 00:15:13.025 "data_offset": 0, 00:15:13.025 "data_size": 65536 00:15:13.025 }, 00:15:13.025 { 00:15:13.025 "name": "BaseBdev3", 00:15:13.025 "uuid": "d71dddb0-afcc-4fa0-9853-d712a9d1af83", 00:15:13.025 "is_configured": true, 00:15:13.025 "data_offset": 0, 00:15:13.025 "data_size": 65536 00:15:13.025 } 00:15:13.025 ] 00:15:13.025 }' 00:15:13.025 13:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:13.025 13:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.592 13:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:13.851 [2024-07-26 13:14:54.138037] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:13.851 13:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:15:13.851 13:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:13.851 13:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:13.851 13:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:13.851 13:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:13.851 13:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:13.851 13:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:13.851 13:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:13.851 13:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:13.851 13:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:13.851 13:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.851 13:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.110 13:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:14.110 "name": "Existed_Raid", 00:15:14.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.110 "strip_size_kb": 64, 00:15:14.110 "state": "configuring", 00:15:14.110 "raid_level": "concat", 00:15:14.110 "superblock": false, 00:15:14.110 "num_base_bdevs": 3, 00:15:14.110 "num_base_bdevs_discovered": 1, 00:15:14.110 "num_base_bdevs_operational": 3, 00:15:14.110 "base_bdevs_list": [ 00:15:14.110 { 00:15:14.110 "name": "BaseBdev1", 00:15:14.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.110 "is_configured": false, 
00:15:14.110 "data_offset": 0, 00:15:14.110 "data_size": 0 00:15:14.110 }, 00:15:14.110 { 00:15:14.110 "name": null, 00:15:14.110 "uuid": "1e997f0d-9eec-4abb-a331-0b3858d0e811", 00:15:14.110 "is_configured": false, 00:15:14.110 "data_offset": 0, 00:15:14.110 "data_size": 65536 00:15:14.110 }, 00:15:14.110 { 00:15:14.110 "name": "BaseBdev3", 00:15:14.110 "uuid": "d71dddb0-afcc-4fa0-9853-d712a9d1af83", 00:15:14.110 "is_configured": true, 00:15:14.110 "data_offset": 0, 00:15:14.110 "data_size": 65536 00:15:14.110 } 00:15:14.110 ] 00:15:14.110 }' 00:15:14.110 13:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:14.110 13:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.677 13:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.677 13:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:14.677 13:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:15:14.677 13:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:14.936 [2024-07-26 13:14:55.396509] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:14.936 BaseBdev1 00:15:14.936 13:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:14.936 13:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:14.936 13:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:14.936 13:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # 
local i 00:15:14.936 13:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:14.936 13:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:14.936 13:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:15.195 13:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:15.454 [ 00:15:15.454 { 00:15:15.454 "name": "BaseBdev1", 00:15:15.454 "aliases": [ 00:15:15.454 "d09d13e8-045f-47a7-9bb5-789d4627d02a" 00:15:15.454 ], 00:15:15.454 "product_name": "Malloc disk", 00:15:15.454 "block_size": 512, 00:15:15.454 "num_blocks": 65536, 00:15:15.454 "uuid": "d09d13e8-045f-47a7-9bb5-789d4627d02a", 00:15:15.454 "assigned_rate_limits": { 00:15:15.454 "rw_ios_per_sec": 0, 00:15:15.454 "rw_mbytes_per_sec": 0, 00:15:15.454 "r_mbytes_per_sec": 0, 00:15:15.454 "w_mbytes_per_sec": 0 00:15:15.454 }, 00:15:15.454 "claimed": true, 00:15:15.454 "claim_type": "exclusive_write", 00:15:15.454 "zoned": false, 00:15:15.454 "supported_io_types": { 00:15:15.454 "read": true, 00:15:15.454 "write": true, 00:15:15.454 "unmap": true, 00:15:15.454 "flush": true, 00:15:15.454 "reset": true, 00:15:15.454 "nvme_admin": false, 00:15:15.454 "nvme_io": false, 00:15:15.454 "nvme_io_md": false, 00:15:15.454 "write_zeroes": true, 00:15:15.454 "zcopy": true, 00:15:15.454 "get_zone_info": false, 00:15:15.454 "zone_management": false, 00:15:15.454 "zone_append": false, 00:15:15.454 "compare": false, 00:15:15.454 "compare_and_write": false, 00:15:15.454 "abort": true, 00:15:15.454 "seek_hole": false, 00:15:15.454 "seek_data": false, 00:15:15.454 "copy": true, 00:15:15.454 "nvme_iov_md": false 00:15:15.454 }, 00:15:15.454 
"memory_domains": [ 00:15:15.454 { 00:15:15.455 "dma_device_id": "system", 00:15:15.455 "dma_device_type": 1 00:15:15.455 }, 00:15:15.455 { 00:15:15.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.455 "dma_device_type": 2 00:15:15.455 } 00:15:15.455 ], 00:15:15.455 "driver_specific": {} 00:15:15.455 } 00:15:15.455 ] 00:15:15.455 13:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:15.455 13:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:15.455 13:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:15.455 13:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:15.455 13:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:15.455 13:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:15.455 13:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:15.455 13:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:15.455 13:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:15.455 13:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:15.455 13:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:15.455 13:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.455 13:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.714 13:14:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:15.714 "name": "Existed_Raid", 00:15:15.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.714 "strip_size_kb": 64, 00:15:15.714 "state": "configuring", 00:15:15.714 "raid_level": "concat", 00:15:15.714 "superblock": false, 00:15:15.714 "num_base_bdevs": 3, 00:15:15.714 "num_base_bdevs_discovered": 2, 00:15:15.714 "num_base_bdevs_operational": 3, 00:15:15.714 "base_bdevs_list": [ 00:15:15.714 { 00:15:15.714 "name": "BaseBdev1", 00:15:15.714 "uuid": "d09d13e8-045f-47a7-9bb5-789d4627d02a", 00:15:15.714 "is_configured": true, 00:15:15.714 "data_offset": 0, 00:15:15.714 "data_size": 65536 00:15:15.714 }, 00:15:15.714 { 00:15:15.714 "name": null, 00:15:15.714 "uuid": "1e997f0d-9eec-4abb-a331-0b3858d0e811", 00:15:15.714 "is_configured": false, 00:15:15.714 "data_offset": 0, 00:15:15.714 "data_size": 65536 00:15:15.714 }, 00:15:15.714 { 00:15:15.714 "name": "BaseBdev3", 00:15:15.714 "uuid": "d71dddb0-afcc-4fa0-9853-d712a9d1af83", 00:15:15.714 "is_configured": true, 00:15:15.714 "data_offset": 0, 00:15:15.714 "data_size": 65536 00:15:15.714 } 00:15:15.714 ] 00:15:15.714 }' 00:15:15.714 13:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:15.714 13:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.651 13:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.651 13:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:16.651 13:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:16.651 13:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev 
BaseBdev3 00:15:16.910 [2024-07-26 13:14:57.365722] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:16.910 13:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:16.910 13:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:16.910 13:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:16.910 13:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:16.910 13:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:16.910 13:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:16.910 13:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:16.910 13:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:16.910 13:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:16.910 13:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:16.910 13:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.910 13:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.169 13:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:17.169 "name": "Existed_Raid", 00:15:17.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.169 "strip_size_kb": 64, 00:15:17.169 "state": "configuring", 00:15:17.169 "raid_level": "concat", 00:15:17.169 "superblock": false, 00:15:17.169 "num_base_bdevs": 3, 
00:15:17.169 "num_base_bdevs_discovered": 1, 00:15:17.169 "num_base_bdevs_operational": 3, 00:15:17.169 "base_bdevs_list": [ 00:15:17.169 { 00:15:17.169 "name": "BaseBdev1", 00:15:17.169 "uuid": "d09d13e8-045f-47a7-9bb5-789d4627d02a", 00:15:17.169 "is_configured": true, 00:15:17.169 "data_offset": 0, 00:15:17.169 "data_size": 65536 00:15:17.169 }, 00:15:17.169 { 00:15:17.169 "name": null, 00:15:17.169 "uuid": "1e997f0d-9eec-4abb-a331-0b3858d0e811", 00:15:17.169 "is_configured": false, 00:15:17.169 "data_offset": 0, 00:15:17.169 "data_size": 65536 00:15:17.169 }, 00:15:17.169 { 00:15:17.169 "name": null, 00:15:17.169 "uuid": "d71dddb0-afcc-4fa0-9853-d712a9d1af83", 00:15:17.169 "is_configured": false, 00:15:17.169 "data_offset": 0, 00:15:17.169 "data_size": 65536 00:15:17.169 } 00:15:17.169 ] 00:15:17.169 }' 00:15:17.169 13:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:17.169 13:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.737 13:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:17.737 13:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:17.996 13:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:17.996 13:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:18.255 [2024-07-26 13:14:58.613166] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:18.255 13:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:18.255 13:14:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:18.255 13:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:18.255 13:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:18.255 13:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:18.255 13:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:18.255 13:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:18.255 13:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:18.255 13:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:18.255 13:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:18.255 13:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.255 13:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.514 13:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:18.514 "name": "Existed_Raid", 00:15:18.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.514 "strip_size_kb": 64, 00:15:18.514 "state": "configuring", 00:15:18.514 "raid_level": "concat", 00:15:18.514 "superblock": false, 00:15:18.514 "num_base_bdevs": 3, 00:15:18.514 "num_base_bdevs_discovered": 2, 00:15:18.514 "num_base_bdevs_operational": 3, 00:15:18.514 "base_bdevs_list": [ 00:15:18.514 { 00:15:18.514 "name": "BaseBdev1", 00:15:18.514 "uuid": "d09d13e8-045f-47a7-9bb5-789d4627d02a", 00:15:18.514 "is_configured": true, 00:15:18.514 
"data_offset": 0, 00:15:18.514 "data_size": 65536 00:15:18.514 }, 00:15:18.514 { 00:15:18.514 "name": null, 00:15:18.514 "uuid": "1e997f0d-9eec-4abb-a331-0b3858d0e811", 00:15:18.514 "is_configured": false, 00:15:18.514 "data_offset": 0, 00:15:18.514 "data_size": 65536 00:15:18.514 }, 00:15:18.514 { 00:15:18.514 "name": "BaseBdev3", 00:15:18.514 "uuid": "d71dddb0-afcc-4fa0-9853-d712a9d1af83", 00:15:18.514 "is_configured": true, 00:15:18.514 "data_offset": 0, 00:15:18.514 "data_size": 65536 00:15:18.514 } 00:15:18.514 ] 00:15:18.514 }' 00:15:18.514 13:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:18.514 13:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.120 13:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.120 13:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:19.379 13:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:19.379 13:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:19.379 [2024-07-26 13:14:59.876708] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:19.379 13:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:19.637 13:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:19.637 13:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:19.637 13:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 
00:15:19.637 13:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:19.637 13:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:19.637 13:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:19.637 13:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:19.637 13:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:19.638 13:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:19.638 13:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.638 13:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.638 13:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:19.638 "name": "Existed_Raid", 00:15:19.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.638 "strip_size_kb": 64, 00:15:19.638 "state": "configuring", 00:15:19.638 "raid_level": "concat", 00:15:19.638 "superblock": false, 00:15:19.638 "num_base_bdevs": 3, 00:15:19.638 "num_base_bdevs_discovered": 1, 00:15:19.638 "num_base_bdevs_operational": 3, 00:15:19.638 "base_bdevs_list": [ 00:15:19.638 { 00:15:19.638 "name": null, 00:15:19.638 "uuid": "d09d13e8-045f-47a7-9bb5-789d4627d02a", 00:15:19.638 "is_configured": false, 00:15:19.638 "data_offset": 0, 00:15:19.638 "data_size": 65536 00:15:19.638 }, 00:15:19.638 { 00:15:19.638 "name": null, 00:15:19.638 "uuid": "1e997f0d-9eec-4abb-a331-0b3858d0e811", 00:15:19.638 "is_configured": false, 00:15:19.638 "data_offset": 0, 00:15:19.638 "data_size": 65536 00:15:19.638 }, 00:15:19.638 { 00:15:19.638 "name": "BaseBdev3", 00:15:19.638 
"uuid": "d71dddb0-afcc-4fa0-9853-d712a9d1af83", 00:15:19.638 "is_configured": true, 00:15:19.638 "data_offset": 0, 00:15:19.638 "data_size": 65536 00:15:19.638 } 00:15:19.638 ] 00:15:19.638 }' 00:15:19.638 13:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:19.638 13:15:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.205 13:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.205 13:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:20.464 13:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:20.464 13:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:20.723 [2024-07-26 13:15:01.126030] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:20.723 13:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:20.723 13:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:20.723 13:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:20.723 13:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:20.723 13:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:20.723 13:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:20.723 13:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 
-- # local raid_bdev_info 00:15:20.723 13:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:20.723 13:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:20.723 13:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:20.723 13:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.723 13:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.983 13:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:20.983 "name": "Existed_Raid", 00:15:20.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.983 "strip_size_kb": 64, 00:15:20.983 "state": "configuring", 00:15:20.983 "raid_level": "concat", 00:15:20.983 "superblock": false, 00:15:20.983 "num_base_bdevs": 3, 00:15:20.983 "num_base_bdevs_discovered": 2, 00:15:20.983 "num_base_bdevs_operational": 3, 00:15:20.983 "base_bdevs_list": [ 00:15:20.983 { 00:15:20.983 "name": null, 00:15:20.983 "uuid": "d09d13e8-045f-47a7-9bb5-789d4627d02a", 00:15:20.983 "is_configured": false, 00:15:20.983 "data_offset": 0, 00:15:20.983 "data_size": 65536 00:15:20.983 }, 00:15:20.983 { 00:15:20.983 "name": "BaseBdev2", 00:15:20.983 "uuid": "1e997f0d-9eec-4abb-a331-0b3858d0e811", 00:15:20.983 "is_configured": true, 00:15:20.983 "data_offset": 0, 00:15:20.983 "data_size": 65536 00:15:20.983 }, 00:15:20.983 { 00:15:20.983 "name": "BaseBdev3", 00:15:20.983 "uuid": "d71dddb0-afcc-4fa0-9853-d712a9d1af83", 00:15:20.983 "is_configured": true, 00:15:20.983 "data_offset": 0, 00:15:20.983 "data_size": 65536 00:15:20.983 } 00:15:20.983 ] 00:15:20.983 }' 00:15:20.983 13:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:15:20.983 13:15:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.552 13:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.552 13:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:21.811 13:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:21.811 13:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.811 13:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:22.070 13:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u d09d13e8-045f-47a7-9bb5-789d4627d02a 00:15:22.070 [2024-07-26 13:15:02.581111] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:22.070 [2024-07-26 13:15:02.581156] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x231e620 00:15:22.070 [2024-07-26 13:15:02.581164] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:22.070 [2024-07-26 13:15:02.581345] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24d88e0 00:15:22.070 [2024-07-26 13:15:02.581452] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x231e620 00:15:22.070 [2024-07-26 13:15:02.581461] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x231e620 00:15:22.070 [2024-07-26 13:15:02.581614] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:15:22.070 NewBaseBdev 00:15:22.329 13:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:22.329 13:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:22.329 13:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:22.329 13:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:22.329 13:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:22.329 13:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:22.329 13:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:22.329 13:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:22.588 [ 00:15:22.588 { 00:15:22.588 "name": "NewBaseBdev", 00:15:22.588 "aliases": [ 00:15:22.588 "d09d13e8-045f-47a7-9bb5-789d4627d02a" 00:15:22.588 ], 00:15:22.588 "product_name": "Malloc disk", 00:15:22.588 "block_size": 512, 00:15:22.588 "num_blocks": 65536, 00:15:22.588 "uuid": "d09d13e8-045f-47a7-9bb5-789d4627d02a", 00:15:22.588 "assigned_rate_limits": { 00:15:22.588 "rw_ios_per_sec": 0, 00:15:22.588 "rw_mbytes_per_sec": 0, 00:15:22.588 "r_mbytes_per_sec": 0, 00:15:22.588 "w_mbytes_per_sec": 0 00:15:22.588 }, 00:15:22.588 "claimed": true, 00:15:22.588 "claim_type": "exclusive_write", 00:15:22.588 "zoned": false, 00:15:22.588 "supported_io_types": { 00:15:22.588 "read": true, 00:15:22.588 "write": true, 00:15:22.588 "unmap": true, 00:15:22.588 "flush": true, 00:15:22.588 "reset": true, 00:15:22.588 "nvme_admin": false, 
00:15:22.588 "nvme_io": false, 00:15:22.588 "nvme_io_md": false, 00:15:22.588 "write_zeroes": true, 00:15:22.588 "zcopy": true, 00:15:22.588 "get_zone_info": false, 00:15:22.588 "zone_management": false, 00:15:22.588 "zone_append": false, 00:15:22.588 "compare": false, 00:15:22.588 "compare_and_write": false, 00:15:22.588 "abort": true, 00:15:22.588 "seek_hole": false, 00:15:22.588 "seek_data": false, 00:15:22.588 "copy": true, 00:15:22.588 "nvme_iov_md": false 00:15:22.588 }, 00:15:22.588 "memory_domains": [ 00:15:22.588 { 00:15:22.588 "dma_device_id": "system", 00:15:22.588 "dma_device_type": 1 00:15:22.588 }, 00:15:22.588 { 00:15:22.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.588 "dma_device_type": 2 00:15:22.588 } 00:15:22.588 ], 00:15:22.588 "driver_specific": {} 00:15:22.588 } 00:15:22.588 ] 00:15:22.589 13:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:22.589 13:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:22.589 13:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:22.589 13:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:22.589 13:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:22.589 13:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:22.589 13:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:22.589 13:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:22.589 13:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:22.589 13:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:22.589 13:15:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:22.589 13:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.589 13:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.851 13:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:22.851 "name": "Existed_Raid", 00:15:22.851 "uuid": "aadcb6a4-c26f-43bd-8170-1c30e768bcee", 00:15:22.851 "strip_size_kb": 64, 00:15:22.851 "state": "online", 00:15:22.851 "raid_level": "concat", 00:15:22.851 "superblock": false, 00:15:22.851 "num_base_bdevs": 3, 00:15:22.851 "num_base_bdevs_discovered": 3, 00:15:22.851 "num_base_bdevs_operational": 3, 00:15:22.851 "base_bdevs_list": [ 00:15:22.851 { 00:15:22.851 "name": "NewBaseBdev", 00:15:22.851 "uuid": "d09d13e8-045f-47a7-9bb5-789d4627d02a", 00:15:22.851 "is_configured": true, 00:15:22.851 "data_offset": 0, 00:15:22.851 "data_size": 65536 00:15:22.851 }, 00:15:22.851 { 00:15:22.851 "name": "BaseBdev2", 00:15:22.851 "uuid": "1e997f0d-9eec-4abb-a331-0b3858d0e811", 00:15:22.851 "is_configured": true, 00:15:22.851 "data_offset": 0, 00:15:22.851 "data_size": 65536 00:15:22.851 }, 00:15:22.851 { 00:15:22.851 "name": "BaseBdev3", 00:15:22.851 "uuid": "d71dddb0-afcc-4fa0-9853-d712a9d1af83", 00:15:22.851 "is_configured": true, 00:15:22.851 "data_offset": 0, 00:15:22.851 "data_size": 65536 00:15:22.851 } 00:15:22.851 ] 00:15:22.851 }' 00:15:22.851 13:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:22.851 13:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.419 13:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:23.419 13:15:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:23.419 13:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:23.419 13:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:23.419 13:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:23.419 13:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:23.419 13:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:23.419 13:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:23.678 [2024-07-26 13:15:04.057351] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:23.678 13:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:23.678 "name": "Existed_Raid", 00:15:23.678 "aliases": [ 00:15:23.678 "aadcb6a4-c26f-43bd-8170-1c30e768bcee" 00:15:23.678 ], 00:15:23.678 "product_name": "Raid Volume", 00:15:23.678 "block_size": 512, 00:15:23.678 "num_blocks": 196608, 00:15:23.678 "uuid": "aadcb6a4-c26f-43bd-8170-1c30e768bcee", 00:15:23.678 "assigned_rate_limits": { 00:15:23.678 "rw_ios_per_sec": 0, 00:15:23.678 "rw_mbytes_per_sec": 0, 00:15:23.678 "r_mbytes_per_sec": 0, 00:15:23.678 "w_mbytes_per_sec": 0 00:15:23.678 }, 00:15:23.678 "claimed": false, 00:15:23.678 "zoned": false, 00:15:23.678 "supported_io_types": { 00:15:23.678 "read": true, 00:15:23.678 "write": true, 00:15:23.678 "unmap": true, 00:15:23.678 "flush": true, 00:15:23.678 "reset": true, 00:15:23.678 "nvme_admin": false, 00:15:23.678 "nvme_io": false, 00:15:23.678 "nvme_io_md": false, 00:15:23.678 "write_zeroes": true, 00:15:23.678 "zcopy": false, 00:15:23.678 "get_zone_info": false, 
00:15:23.678 "zone_management": false, 00:15:23.678 "zone_append": false, 00:15:23.678 "compare": false, 00:15:23.678 "compare_and_write": false, 00:15:23.678 "abort": false, 00:15:23.678 "seek_hole": false, 00:15:23.678 "seek_data": false, 00:15:23.678 "copy": false, 00:15:23.678 "nvme_iov_md": false 00:15:23.678 }, 00:15:23.678 "memory_domains": [ 00:15:23.678 { 00:15:23.678 "dma_device_id": "system", 00:15:23.678 "dma_device_type": 1 00:15:23.678 }, 00:15:23.678 { 00:15:23.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.678 "dma_device_type": 2 00:15:23.678 }, 00:15:23.678 { 00:15:23.678 "dma_device_id": "system", 00:15:23.678 "dma_device_type": 1 00:15:23.678 }, 00:15:23.678 { 00:15:23.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.678 "dma_device_type": 2 00:15:23.678 }, 00:15:23.678 { 00:15:23.678 "dma_device_id": "system", 00:15:23.678 "dma_device_type": 1 00:15:23.679 }, 00:15:23.679 { 00:15:23.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.679 "dma_device_type": 2 00:15:23.679 } 00:15:23.679 ], 00:15:23.679 "driver_specific": { 00:15:23.679 "raid": { 00:15:23.679 "uuid": "aadcb6a4-c26f-43bd-8170-1c30e768bcee", 00:15:23.679 "strip_size_kb": 64, 00:15:23.679 "state": "online", 00:15:23.679 "raid_level": "concat", 00:15:23.679 "superblock": false, 00:15:23.679 "num_base_bdevs": 3, 00:15:23.679 "num_base_bdevs_discovered": 3, 00:15:23.679 "num_base_bdevs_operational": 3, 00:15:23.679 "base_bdevs_list": [ 00:15:23.679 { 00:15:23.679 "name": "NewBaseBdev", 00:15:23.679 "uuid": "d09d13e8-045f-47a7-9bb5-789d4627d02a", 00:15:23.679 "is_configured": true, 00:15:23.679 "data_offset": 0, 00:15:23.679 "data_size": 65536 00:15:23.679 }, 00:15:23.679 { 00:15:23.679 "name": "BaseBdev2", 00:15:23.679 "uuid": "1e997f0d-9eec-4abb-a331-0b3858d0e811", 00:15:23.679 "is_configured": true, 00:15:23.679 "data_offset": 0, 00:15:23.679 "data_size": 65536 00:15:23.679 }, 00:15:23.679 { 00:15:23.679 "name": "BaseBdev3", 00:15:23.679 "uuid": 
"d71dddb0-afcc-4fa0-9853-d712a9d1af83", 00:15:23.679 "is_configured": true, 00:15:23.679 "data_offset": 0, 00:15:23.679 "data_size": 65536 00:15:23.679 } 00:15:23.679 ] 00:15:23.679 } 00:15:23.679 } 00:15:23.679 }' 00:15:23.679 13:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:23.679 13:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:23.679 BaseBdev2 00:15:23.679 BaseBdev3' 00:15:23.679 13:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:23.679 13:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:23.679 13:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:23.937 13:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:23.937 "name": "NewBaseBdev", 00:15:23.937 "aliases": [ 00:15:23.937 "d09d13e8-045f-47a7-9bb5-789d4627d02a" 00:15:23.937 ], 00:15:23.937 "product_name": "Malloc disk", 00:15:23.937 "block_size": 512, 00:15:23.937 "num_blocks": 65536, 00:15:23.937 "uuid": "d09d13e8-045f-47a7-9bb5-789d4627d02a", 00:15:23.937 "assigned_rate_limits": { 00:15:23.937 "rw_ios_per_sec": 0, 00:15:23.937 "rw_mbytes_per_sec": 0, 00:15:23.937 "r_mbytes_per_sec": 0, 00:15:23.937 "w_mbytes_per_sec": 0 00:15:23.937 }, 00:15:23.937 "claimed": true, 00:15:23.937 "claim_type": "exclusive_write", 00:15:23.937 "zoned": false, 00:15:23.937 "supported_io_types": { 00:15:23.937 "read": true, 00:15:23.937 "write": true, 00:15:23.937 "unmap": true, 00:15:23.937 "flush": true, 00:15:23.937 "reset": true, 00:15:23.937 "nvme_admin": false, 00:15:23.937 "nvme_io": false, 00:15:23.937 "nvme_io_md": false, 00:15:23.937 "write_zeroes": true, 
00:15:23.937 "zcopy": true, 00:15:23.937 "get_zone_info": false, 00:15:23.937 "zone_management": false, 00:15:23.937 "zone_append": false, 00:15:23.937 "compare": false, 00:15:23.937 "compare_and_write": false, 00:15:23.937 "abort": true, 00:15:23.937 "seek_hole": false, 00:15:23.937 "seek_data": false, 00:15:23.937 "copy": true, 00:15:23.937 "nvme_iov_md": false 00:15:23.937 }, 00:15:23.937 "memory_domains": [ 00:15:23.937 { 00:15:23.937 "dma_device_id": "system", 00:15:23.937 "dma_device_type": 1 00:15:23.937 }, 00:15:23.937 { 00:15:23.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.937 "dma_device_type": 2 00:15:23.937 } 00:15:23.937 ], 00:15:23.937 "driver_specific": {} 00:15:23.937 }' 00:15:23.937 13:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:23.937 13:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:23.937 13:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:23.937 13:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:24.195 13:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:24.196 13:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:24.196 13:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:24.196 13:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:24.196 13:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:24.196 13:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:24.196 13:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:24.196 13:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:24.196 13:15:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:24.196 13:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:24.196 13:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:24.454 13:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:24.454 "name": "BaseBdev2", 00:15:24.454 "aliases": [ 00:15:24.454 "1e997f0d-9eec-4abb-a331-0b3858d0e811" 00:15:24.454 ], 00:15:24.454 "product_name": "Malloc disk", 00:15:24.454 "block_size": 512, 00:15:24.454 "num_blocks": 65536, 00:15:24.454 "uuid": "1e997f0d-9eec-4abb-a331-0b3858d0e811", 00:15:24.454 "assigned_rate_limits": { 00:15:24.454 "rw_ios_per_sec": 0, 00:15:24.454 "rw_mbytes_per_sec": 0, 00:15:24.454 "r_mbytes_per_sec": 0, 00:15:24.454 "w_mbytes_per_sec": 0 00:15:24.454 }, 00:15:24.454 "claimed": true, 00:15:24.454 "claim_type": "exclusive_write", 00:15:24.454 "zoned": false, 00:15:24.454 "supported_io_types": { 00:15:24.454 "read": true, 00:15:24.454 "write": true, 00:15:24.454 "unmap": true, 00:15:24.454 "flush": true, 00:15:24.454 "reset": true, 00:15:24.454 "nvme_admin": false, 00:15:24.454 "nvme_io": false, 00:15:24.454 "nvme_io_md": false, 00:15:24.454 "write_zeroes": true, 00:15:24.454 "zcopy": true, 00:15:24.454 "get_zone_info": false, 00:15:24.454 "zone_management": false, 00:15:24.454 "zone_append": false, 00:15:24.454 "compare": false, 00:15:24.454 "compare_and_write": false, 00:15:24.454 "abort": true, 00:15:24.454 "seek_hole": false, 00:15:24.454 "seek_data": false, 00:15:24.454 "copy": true, 00:15:24.454 "nvme_iov_md": false 00:15:24.454 }, 00:15:24.454 "memory_domains": [ 00:15:24.454 { 00:15:24.454 "dma_device_id": "system", 00:15:24.454 "dma_device_type": 1 00:15:24.454 }, 00:15:24.454 { 00:15:24.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.454 "dma_device_type": 2 
00:15:24.454 } 00:15:24.454 ], 00:15:24.454 "driver_specific": {} 00:15:24.454 }' 00:15:24.454 13:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:24.454 13:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:24.454 13:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:24.454 13:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:24.712 13:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:24.713 13:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:24.713 13:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:24.713 13:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:24.713 13:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:24.713 13:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:24.713 13:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:24.713 13:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:24.713 13:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:24.713 13:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:24.713 13:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:24.971 13:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:24.971 "name": "BaseBdev3", 00:15:24.971 "aliases": [ 00:15:24.971 "d71dddb0-afcc-4fa0-9853-d712a9d1af83" 00:15:24.971 ], 00:15:24.971 "product_name": 
"Malloc disk", 00:15:24.971 "block_size": 512, 00:15:24.971 "num_blocks": 65536, 00:15:24.971 "uuid": "d71dddb0-afcc-4fa0-9853-d712a9d1af83", 00:15:24.971 "assigned_rate_limits": { 00:15:24.971 "rw_ios_per_sec": 0, 00:15:24.971 "rw_mbytes_per_sec": 0, 00:15:24.971 "r_mbytes_per_sec": 0, 00:15:24.971 "w_mbytes_per_sec": 0 00:15:24.971 }, 00:15:24.971 "claimed": true, 00:15:24.971 "claim_type": "exclusive_write", 00:15:24.971 "zoned": false, 00:15:24.971 "supported_io_types": { 00:15:24.971 "read": true, 00:15:24.971 "write": true, 00:15:24.971 "unmap": true, 00:15:24.971 "flush": true, 00:15:24.971 "reset": true, 00:15:24.971 "nvme_admin": false, 00:15:24.971 "nvme_io": false, 00:15:24.971 "nvme_io_md": false, 00:15:24.971 "write_zeroes": true, 00:15:24.971 "zcopy": true, 00:15:24.971 "get_zone_info": false, 00:15:24.971 "zone_management": false, 00:15:24.971 "zone_append": false, 00:15:24.971 "compare": false, 00:15:24.971 "compare_and_write": false, 00:15:24.971 "abort": true, 00:15:24.971 "seek_hole": false, 00:15:24.971 "seek_data": false, 00:15:24.971 "copy": true, 00:15:24.971 "nvme_iov_md": false 00:15:24.971 }, 00:15:24.971 "memory_domains": [ 00:15:24.971 { 00:15:24.971 "dma_device_id": "system", 00:15:24.971 "dma_device_type": 1 00:15:24.971 }, 00:15:24.971 { 00:15:24.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.971 "dma_device_type": 2 00:15:24.971 } 00:15:24.971 ], 00:15:24.971 "driver_specific": {} 00:15:24.971 }' 00:15:24.971 13:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:24.971 13:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:25.230 13:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:25.230 13:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:25.230 13:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:25.230 13:15:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:25.230 13:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:25.230 13:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:25.230 13:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:25.230 13:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:25.230 13:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:25.230 13:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:25.230 13:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:25.489 [2024-07-26 13:15:05.942037] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:25.489 [2024-07-26 13:15:05.942061] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:25.489 [2024-07-26 13:15:05.942115] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:25.489 [2024-07-26 13:15:05.942168] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:25.489 [2024-07-26 13:15:05.942179] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x231e620 name Existed_Raid, state offline 00:15:25.489 13:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 694371 00:15:25.489 13:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 694371 ']' 00:15:25.489 13:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 694371 00:15:25.489 13:15:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@955 -- # uname 00:15:25.489 13:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:25.489 13:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 694371 00:15:25.489 13:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:25.489 13:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:25.489 13:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 694371' 00:15:25.489 killing process with pid 694371 00:15:25.489 13:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 694371 00:15:25.489 [2024-07-26 13:15:06.014254] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:25.489 13:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 694371 00:15:25.747 [2024-07-26 13:15:06.037858] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:25.747 13:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:25.747 00:15:25.747 real 0m26.740s 00:15:25.747 user 0m49.051s 00:15:25.747 sys 0m4.784s 00:15:25.747 13:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:25.747 13:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.747 ************************************ 00:15:25.747 END TEST raid_state_function_test 00:15:25.747 ************************************ 00:15:25.747 13:15:06 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:15:25.747 13:15:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:25.747 13:15:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:25.747 
13:15:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:26.007 ************************************ 00:15:26.007 START TEST raid_state_function_test_sb 00:15:26.007 ************************************ 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:26.007 
13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=700018 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 700018' 00:15:26.007 Process raid pid: 700018 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 700018 /var/tmp/spdk-raid.sock 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 700018 ']' 
00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:26.007 13:15:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:26.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:26.008 13:15:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:26.008 13:15:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.008 [2024-07-26 13:15:06.382149] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:15:26.008 [2024-07-26 13:15:06.382211] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 
0000:3d:01.5 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3f:01.3 cannot be 
used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:26.008 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:26.008 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:26.008 [2024-07-26 13:15:06.516452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.267 [2024-07-26 13:15:06.604360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.267 [2024-07-26 13:15:06.666916] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: 
raid_bdev_get_ctx_size 00:15:26.267 [2024-07-26 13:15:06.666952] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:26.836 13:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:26.836 13:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:26.836 13:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:27.096 [2024-07-26 13:15:07.490386] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:27.096 [2024-07-26 13:15:07.490428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:27.096 [2024-07-26 13:15:07.490438] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:27.096 [2024-07-26 13:15:07.490449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:27.096 [2024-07-26 13:15:07.490457] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:27.096 [2024-07-26 13:15:07.490467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:27.096 13:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:27.096 13:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:27.096 13:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:27.096 13:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:27.096 13:15:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:27.096 13:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:27.096 13:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:27.096 13:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:27.096 13:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:27.096 13:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:27.096 13:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.096 13:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.355 13:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:27.355 "name": "Existed_Raid", 00:15:27.355 "uuid": "45bdb37d-f07b-4e5f-a692-859a9b736274", 00:15:27.355 "strip_size_kb": 64, 00:15:27.355 "state": "configuring", 00:15:27.355 "raid_level": "concat", 00:15:27.355 "superblock": true, 00:15:27.355 "num_base_bdevs": 3, 00:15:27.355 "num_base_bdevs_discovered": 0, 00:15:27.355 "num_base_bdevs_operational": 3, 00:15:27.355 "base_bdevs_list": [ 00:15:27.355 { 00:15:27.355 "name": "BaseBdev1", 00:15:27.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.355 "is_configured": false, 00:15:27.355 "data_offset": 0, 00:15:27.355 "data_size": 0 00:15:27.355 }, 00:15:27.355 { 00:15:27.355 "name": "BaseBdev2", 00:15:27.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.355 "is_configured": false, 00:15:27.355 "data_offset": 0, 00:15:27.355 "data_size": 0 00:15:27.355 }, 00:15:27.356 { 00:15:27.356 "name": "BaseBdev3", 00:15:27.356 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:27.356 "is_configured": false, 00:15:27.356 "data_offset": 0, 00:15:27.356 "data_size": 0 00:15:27.356 } 00:15:27.356 ] 00:15:27.356 }' 00:15:27.356 13:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:27.356 13:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.924 13:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:28.183 [2024-07-26 13:15:08.508922] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:28.183 [2024-07-26 13:15:08.508954] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10f3f40 name Existed_Raid, state configuring 00:15:28.183 13:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:28.443 [2024-07-26 13:15:08.733542] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:28.443 [2024-07-26 13:15:08.733574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:28.443 [2024-07-26 13:15:08.733589] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:28.443 [2024-07-26 13:15:08.733600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:28.443 [2024-07-26 13:15:08.733608] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:28.443 [2024-07-26 13:15:08.733618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:28.443 13:15:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:28.702 [2024-07-26 13:15:08.971547] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:28.702 BaseBdev1 00:15:28.702 13:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:28.702 13:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:28.702 13:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:28.702 13:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:28.702 13:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:28.702 13:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:28.702 13:15:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:28.702 13:15:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:28.962 [ 00:15:28.962 { 00:15:28.962 "name": "BaseBdev1", 00:15:28.962 "aliases": [ 00:15:28.962 "b69392e6-c131-4a94-82e9-a5354ab4d174" 00:15:28.962 ], 00:15:28.962 "product_name": "Malloc disk", 00:15:28.962 "block_size": 512, 00:15:28.962 "num_blocks": 65536, 00:15:28.962 "uuid": "b69392e6-c131-4a94-82e9-a5354ab4d174", 00:15:28.962 "assigned_rate_limits": { 00:15:28.962 "rw_ios_per_sec": 0, 00:15:28.962 "rw_mbytes_per_sec": 0, 00:15:28.962 "r_mbytes_per_sec": 0, 00:15:28.962 "w_mbytes_per_sec": 0 00:15:28.962 }, 00:15:28.962 "claimed": true, 00:15:28.962 
"claim_type": "exclusive_write", 00:15:28.962 "zoned": false, 00:15:28.962 "supported_io_types": { 00:15:28.962 "read": true, 00:15:28.962 "write": true, 00:15:28.962 "unmap": true, 00:15:28.962 "flush": true, 00:15:28.962 "reset": true, 00:15:28.962 "nvme_admin": false, 00:15:28.962 "nvme_io": false, 00:15:28.962 "nvme_io_md": false, 00:15:28.962 "write_zeroes": true, 00:15:28.962 "zcopy": true, 00:15:28.962 "get_zone_info": false, 00:15:28.962 "zone_management": false, 00:15:28.962 "zone_append": false, 00:15:28.962 "compare": false, 00:15:28.962 "compare_and_write": false, 00:15:28.962 "abort": true, 00:15:28.962 "seek_hole": false, 00:15:28.962 "seek_data": false, 00:15:28.962 "copy": true, 00:15:28.962 "nvme_iov_md": false 00:15:28.962 }, 00:15:28.962 "memory_domains": [ 00:15:28.962 { 00:15:28.962 "dma_device_id": "system", 00:15:28.962 "dma_device_type": 1 00:15:28.962 }, 00:15:28.962 { 00:15:28.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.962 "dma_device_type": 2 00:15:28.962 } 00:15:28.962 ], 00:15:28.962 "driver_specific": {} 00:15:28.962 } 00:15:28.962 ] 00:15:28.962 13:15:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:28.962 13:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:28.962 13:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:28.962 13:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:28.962 13:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:28.962 13:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:28.962 13:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:28.962 13:15:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:28.962 13:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:28.962 13:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:28.962 13:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:28.962 13:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.962 13:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.222 13:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:29.222 "name": "Existed_Raid", 00:15:29.222 "uuid": "7ae2a616-9bb6-4e98-a7bd-660b67f457b9", 00:15:29.222 "strip_size_kb": 64, 00:15:29.222 "state": "configuring", 00:15:29.222 "raid_level": "concat", 00:15:29.222 "superblock": true, 00:15:29.222 "num_base_bdevs": 3, 00:15:29.222 "num_base_bdevs_discovered": 1, 00:15:29.222 "num_base_bdevs_operational": 3, 00:15:29.222 "base_bdevs_list": [ 00:15:29.222 { 00:15:29.222 "name": "BaseBdev1", 00:15:29.222 "uuid": "b69392e6-c131-4a94-82e9-a5354ab4d174", 00:15:29.222 "is_configured": true, 00:15:29.222 "data_offset": 2048, 00:15:29.222 "data_size": 63488 00:15:29.222 }, 00:15:29.222 { 00:15:29.222 "name": "BaseBdev2", 00:15:29.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.222 "is_configured": false, 00:15:29.222 "data_offset": 0, 00:15:29.222 "data_size": 0 00:15:29.222 }, 00:15:29.222 { 00:15:29.222 "name": "BaseBdev3", 00:15:29.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.222 "is_configured": false, 00:15:29.222 "data_offset": 0, 00:15:29.222 "data_size": 0 00:15:29.222 } 00:15:29.222 ] 00:15:29.222 }' 00:15:29.222 13:15:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:29.222 13:15:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.791 13:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:30.056 [2024-07-26 13:15:10.419356] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:30.056 [2024-07-26 13:15:10.419392] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10f3810 name Existed_Raid, state configuring 00:15:30.056 13:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:30.316 [2024-07-26 13:15:10.643980] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:30.316 [2024-07-26 13:15:10.645366] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:30.316 [2024-07-26 13:15:10.645396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:30.316 [2024-07-26 13:15:10.645405] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:30.316 [2024-07-26 13:15:10.645415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:30.316 13:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:30.316 13:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:30.316 13:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:30.316 13:15:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:30.316 13:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:30.316 13:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:30.316 13:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:30.316 13:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:30.316 13:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:30.316 13:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:30.316 13:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:30.316 13:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:30.316 13:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.316 13:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.575 13:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:30.575 "name": "Existed_Raid", 00:15:30.575 "uuid": "a52a9356-dba3-4479-8284-797c7f622cc0", 00:15:30.575 "strip_size_kb": 64, 00:15:30.575 "state": "configuring", 00:15:30.575 "raid_level": "concat", 00:15:30.575 "superblock": true, 00:15:30.575 "num_base_bdevs": 3, 00:15:30.575 "num_base_bdevs_discovered": 1, 00:15:30.575 "num_base_bdevs_operational": 3, 00:15:30.575 "base_bdevs_list": [ 00:15:30.575 { 00:15:30.575 "name": "BaseBdev1", 00:15:30.575 "uuid": "b69392e6-c131-4a94-82e9-a5354ab4d174", 00:15:30.576 
"is_configured": true, 00:15:30.576 "data_offset": 2048, 00:15:30.576 "data_size": 63488 00:15:30.576 }, 00:15:30.576 { 00:15:30.576 "name": "BaseBdev2", 00:15:30.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.576 "is_configured": false, 00:15:30.576 "data_offset": 0, 00:15:30.576 "data_size": 0 00:15:30.576 }, 00:15:30.576 { 00:15:30.576 "name": "BaseBdev3", 00:15:30.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.576 "is_configured": false, 00:15:30.576 "data_offset": 0, 00:15:30.576 "data_size": 0 00:15:30.576 } 00:15:30.576 ] 00:15:30.576 }' 00:15:30.576 13:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:30.576 13:15:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.144 13:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:31.403 [2024-07-26 13:15:11.685840] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:31.403 BaseBdev2 00:15:31.403 13:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:31.403 13:15:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:31.403 13:15:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:31.403 13:15:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:31.403 13:15:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:31.403 13:15:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:31.403 13:15:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:31.662 13:15:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:31.662 [ 00:15:31.662 { 00:15:31.662 "name": "BaseBdev2", 00:15:31.662 "aliases": [ 00:15:31.662 "62178133-ec90-429a-9a3e-a7564750ef4f" 00:15:31.662 ], 00:15:31.662 "product_name": "Malloc disk", 00:15:31.662 "block_size": 512, 00:15:31.662 "num_blocks": 65536, 00:15:31.662 "uuid": "62178133-ec90-429a-9a3e-a7564750ef4f", 00:15:31.662 "assigned_rate_limits": { 00:15:31.662 "rw_ios_per_sec": 0, 00:15:31.662 "rw_mbytes_per_sec": 0, 00:15:31.662 "r_mbytes_per_sec": 0, 00:15:31.662 "w_mbytes_per_sec": 0 00:15:31.662 }, 00:15:31.662 "claimed": true, 00:15:31.662 "claim_type": "exclusive_write", 00:15:31.662 "zoned": false, 00:15:31.662 "supported_io_types": { 00:15:31.662 "read": true, 00:15:31.662 "write": true, 00:15:31.662 "unmap": true, 00:15:31.662 "flush": true, 00:15:31.662 "reset": true, 00:15:31.662 "nvme_admin": false, 00:15:31.662 "nvme_io": false, 00:15:31.662 "nvme_io_md": false, 00:15:31.662 "write_zeroes": true, 00:15:31.662 "zcopy": true, 00:15:31.662 "get_zone_info": false, 00:15:31.662 "zone_management": false, 00:15:31.662 "zone_append": false, 00:15:31.662 "compare": false, 00:15:31.662 "compare_and_write": false, 00:15:31.662 "abort": true, 00:15:31.662 "seek_hole": false, 00:15:31.662 "seek_data": false, 00:15:31.662 "copy": true, 00:15:31.662 "nvme_iov_md": false 00:15:31.662 }, 00:15:31.662 "memory_domains": [ 00:15:31.662 { 00:15:31.662 "dma_device_id": "system", 00:15:31.662 "dma_device_type": 1 00:15:31.662 }, 00:15:31.662 { 00:15:31.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.662 "dma_device_type": 2 00:15:31.662 } 00:15:31.662 ], 00:15:31.662 "driver_specific": {} 00:15:31.662 } 00:15:31.662 ] 
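The `waitforbdev` helper above polls `rpc.py bdev_get_bdevs -b BaseBdev2 -t 2000` until the bdev appears, and what it dumps is plain JSON. As a minimal offline sketch of reading such a descriptor (the JSON below is abbreviated from the log dump; this parsing code is an illustration, not part of the SPDK test suite):

```python
import json

# Abbreviated copy of the BaseBdev2 descriptor dumped by
# `rpc.py bdev_get_bdevs -b BaseBdev2 -t 2000` in the log above.
bdev_json = '''
[
  {
    "name": "BaseBdev2",
    "product_name": "Malloc disk",
    "block_size": 512,
    "num_blocks": 65536,
    "claimed": true,
    "claim_type": "exclusive_write"
  }
]
'''

bdev = json.loads(bdev_json)[0]

# A malloc base bdev claimed by the raid module reports an
# exclusive_write claim, as seen in the dump.
assert bdev["claimed"] and bdev["claim_type"] == "exclusive_write"

# 65536 blocks * 512 B = 32 MiB, matching `bdev_malloc_create 32 512`.
size_mb = bdev["num_blocks"] * bdev["block_size"] // (1024 * 1024)
print(size_mb)  # 32
```

This is why the test creates each base bdev with `bdev_malloc_create 32 512`: 32 MiB at a 512-byte block size yields the 65536 blocks visible in every BaseBdev descriptor.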
00:15:31.662 13:15:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:31.662 13:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:31.662 13:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:31.662 13:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:31.662 13:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:31.662 13:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:31.662 13:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:31.662 13:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:31.662 13:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:31.662 13:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:31.662 13:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:31.662 13:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:31.662 13:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:31.662 13:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.662 13:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.925 13:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:31.925 "name": "Existed_Raid", 
00:15:31.925 "uuid": "a52a9356-dba3-4479-8284-797c7f622cc0", 00:15:31.925 "strip_size_kb": 64, 00:15:31.925 "state": "configuring", 00:15:31.925 "raid_level": "concat", 00:15:31.925 "superblock": true, 00:15:31.925 "num_base_bdevs": 3, 00:15:31.925 "num_base_bdevs_discovered": 2, 00:15:31.925 "num_base_bdevs_operational": 3, 00:15:31.925 "base_bdevs_list": [ 00:15:31.925 { 00:15:31.925 "name": "BaseBdev1", 00:15:31.925 "uuid": "b69392e6-c131-4a94-82e9-a5354ab4d174", 00:15:31.925 "is_configured": true, 00:15:31.925 "data_offset": 2048, 00:15:31.925 "data_size": 63488 00:15:31.925 }, 00:15:31.925 { 00:15:31.925 "name": "BaseBdev2", 00:15:31.925 "uuid": "62178133-ec90-429a-9a3e-a7564750ef4f", 00:15:31.925 "is_configured": true, 00:15:31.925 "data_offset": 2048, 00:15:31.925 "data_size": 63488 00:15:31.925 }, 00:15:31.925 { 00:15:31.925 "name": "BaseBdev3", 00:15:31.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.925 "is_configured": false, 00:15:31.925 "data_offset": 0, 00:15:31.925 "data_size": 0 00:15:31.925 } 00:15:31.925 ] 00:15:31.925 }' 00:15:31.925 13:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:31.925 13:15:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.545 13:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:32.804 [2024-07-26 13:15:13.177009] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:32.804 [2024-07-26 13:15:13.177168] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x10f4710 00:15:32.804 [2024-07-26 13:15:13.177181] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:32.804 [2024-07-26 13:15:13.177345] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10f43e0 
00:15:32.804 [2024-07-26 13:15:13.177453] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10f4710 00:15:32.804 [2024-07-26 13:15:13.177463] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x10f4710 00:15:32.804 [2024-07-26 13:15:13.177550] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.804 BaseBdev3 00:15:32.804 13:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:32.804 13:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:32.804 13:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:32.804 13:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:32.804 13:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:32.804 13:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:32.804 13:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:33.063 13:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:33.322 [ 00:15:33.322 { 00:15:33.322 "name": "BaseBdev3", 00:15:33.322 "aliases": [ 00:15:33.322 "7c2b7bdd-b621-470e-aa90-d6bea03ac3b5" 00:15:33.322 ], 00:15:33.322 "product_name": "Malloc disk", 00:15:33.322 "block_size": 512, 00:15:33.322 "num_blocks": 65536, 00:15:33.322 "uuid": "7c2b7bdd-b621-470e-aa90-d6bea03ac3b5", 00:15:33.322 "assigned_rate_limits": { 00:15:33.322 "rw_ios_per_sec": 0, 00:15:33.322 "rw_mbytes_per_sec": 0, 00:15:33.322 
"r_mbytes_per_sec": 0, 00:15:33.322 "w_mbytes_per_sec": 0 00:15:33.322 }, 00:15:33.322 "claimed": true, 00:15:33.322 "claim_type": "exclusive_write", 00:15:33.322 "zoned": false, 00:15:33.322 "supported_io_types": { 00:15:33.322 "read": true, 00:15:33.322 "write": true, 00:15:33.322 "unmap": true, 00:15:33.322 "flush": true, 00:15:33.322 "reset": true, 00:15:33.322 "nvme_admin": false, 00:15:33.322 "nvme_io": false, 00:15:33.322 "nvme_io_md": false, 00:15:33.322 "write_zeroes": true, 00:15:33.322 "zcopy": true, 00:15:33.322 "get_zone_info": false, 00:15:33.322 "zone_management": false, 00:15:33.322 "zone_append": false, 00:15:33.322 "compare": false, 00:15:33.322 "compare_and_write": false, 00:15:33.322 "abort": true, 00:15:33.322 "seek_hole": false, 00:15:33.322 "seek_data": false, 00:15:33.322 "copy": true, 00:15:33.322 "nvme_iov_md": false 00:15:33.322 }, 00:15:33.322 "memory_domains": [ 00:15:33.322 { 00:15:33.322 "dma_device_id": "system", 00:15:33.322 "dma_device_type": 1 00:15:33.322 }, 00:15:33.322 { 00:15:33.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.322 "dma_device_type": 2 00:15:33.322 } 00:15:33.322 ], 00:15:33.322 "driver_specific": {} 00:15:33.322 } 00:15:33.322 ] 00:15:33.322 13:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:33.322 13:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:33.322 13:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:33.322 13:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:33.322 13:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:33.322 13:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:33.322 13:15:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:33.322 13:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:33.322 13:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:33.322 13:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:33.322 13:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:33.322 13:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:33.322 13:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:33.322 13:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.322 13:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.581 13:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:33.581 "name": "Existed_Raid", 00:15:33.581 "uuid": "a52a9356-dba3-4479-8284-797c7f622cc0", 00:15:33.581 "strip_size_kb": 64, 00:15:33.581 "state": "online", 00:15:33.581 "raid_level": "concat", 00:15:33.581 "superblock": true, 00:15:33.581 "num_base_bdevs": 3, 00:15:33.581 "num_base_bdevs_discovered": 3, 00:15:33.581 "num_base_bdevs_operational": 3, 00:15:33.581 "base_bdevs_list": [ 00:15:33.581 { 00:15:33.581 "name": "BaseBdev1", 00:15:33.581 "uuid": "b69392e6-c131-4a94-82e9-a5354ab4d174", 00:15:33.581 "is_configured": true, 00:15:33.581 "data_offset": 2048, 00:15:33.581 "data_size": 63488 00:15:33.581 }, 00:15:33.581 { 00:15:33.581 "name": "BaseBdev2", 00:15:33.581 "uuid": "62178133-ec90-429a-9a3e-a7564750ef4f", 00:15:33.581 "is_configured": true, 00:15:33.581 "data_offset": 2048, 00:15:33.581 
"data_size": 63488 00:15:33.581 }, 00:15:33.581 { 00:15:33.581 "name": "BaseBdev3", 00:15:33.581 "uuid": "7c2b7bdd-b621-470e-aa90-d6bea03ac3b5", 00:15:33.581 "is_configured": true, 00:15:33.581 "data_offset": 2048, 00:15:33.581 "data_size": 63488 00:15:33.581 } 00:15:33.581 ] 00:15:33.581 }' 00:15:33.581 13:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:33.581 13:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.148 13:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:34.148 13:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:34.148 13:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:34.148 13:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:34.148 13:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:34.148 13:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:34.148 13:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:34.148 13:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:34.148 [2024-07-26 13:15:14.572950] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.148 13:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:34.148 "name": "Existed_Raid", 00:15:34.148 "aliases": [ 00:15:34.148 "a52a9356-dba3-4479-8284-797c7f622cc0" 00:15:34.148 ], 00:15:34.148 "product_name": "Raid Volume", 00:15:34.148 "block_size": 512, 00:15:34.148 "num_blocks": 190464, 00:15:34.148 "uuid": 
"a52a9356-dba3-4479-8284-797c7f622cc0", 00:15:34.148 "assigned_rate_limits": { 00:15:34.148 "rw_ios_per_sec": 0, 00:15:34.148 "rw_mbytes_per_sec": 0, 00:15:34.148 "r_mbytes_per_sec": 0, 00:15:34.148 "w_mbytes_per_sec": 0 00:15:34.148 }, 00:15:34.148 "claimed": false, 00:15:34.148 "zoned": false, 00:15:34.148 "supported_io_types": { 00:15:34.148 "read": true, 00:15:34.148 "write": true, 00:15:34.148 "unmap": true, 00:15:34.148 "flush": true, 00:15:34.148 "reset": true, 00:15:34.148 "nvme_admin": false, 00:15:34.148 "nvme_io": false, 00:15:34.148 "nvme_io_md": false, 00:15:34.148 "write_zeroes": true, 00:15:34.148 "zcopy": false, 00:15:34.148 "get_zone_info": false, 00:15:34.148 "zone_management": false, 00:15:34.148 "zone_append": false, 00:15:34.148 "compare": false, 00:15:34.148 "compare_and_write": false, 00:15:34.148 "abort": false, 00:15:34.148 "seek_hole": false, 00:15:34.148 "seek_data": false, 00:15:34.148 "copy": false, 00:15:34.148 "nvme_iov_md": false 00:15:34.148 }, 00:15:34.148 "memory_domains": [ 00:15:34.148 { 00:15:34.149 "dma_device_id": "system", 00:15:34.149 "dma_device_type": 1 00:15:34.149 }, 00:15:34.149 { 00:15:34.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.149 "dma_device_type": 2 00:15:34.149 }, 00:15:34.149 { 00:15:34.149 "dma_device_id": "system", 00:15:34.149 "dma_device_type": 1 00:15:34.149 }, 00:15:34.149 { 00:15:34.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.149 "dma_device_type": 2 00:15:34.149 }, 00:15:34.149 { 00:15:34.149 "dma_device_id": "system", 00:15:34.149 "dma_device_type": 1 00:15:34.149 }, 00:15:34.149 { 00:15:34.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.149 "dma_device_type": 2 00:15:34.149 } 00:15:34.149 ], 00:15:34.149 "driver_specific": { 00:15:34.149 "raid": { 00:15:34.149 "uuid": "a52a9356-dba3-4479-8284-797c7f622cc0", 00:15:34.149 "strip_size_kb": 64, 00:15:34.149 "state": "online", 00:15:34.149 "raid_level": "concat", 00:15:34.149 "superblock": true, 00:15:34.149 "num_base_bdevs": 
3, 00:15:34.149 "num_base_bdevs_discovered": 3, 00:15:34.149 "num_base_bdevs_operational": 3, 00:15:34.149 "base_bdevs_list": [ 00:15:34.149 { 00:15:34.149 "name": "BaseBdev1", 00:15:34.149 "uuid": "b69392e6-c131-4a94-82e9-a5354ab4d174", 00:15:34.149 "is_configured": true, 00:15:34.149 "data_offset": 2048, 00:15:34.149 "data_size": 63488 00:15:34.149 }, 00:15:34.149 { 00:15:34.149 "name": "BaseBdev2", 00:15:34.149 "uuid": "62178133-ec90-429a-9a3e-a7564750ef4f", 00:15:34.149 "is_configured": true, 00:15:34.149 "data_offset": 2048, 00:15:34.149 "data_size": 63488 00:15:34.149 }, 00:15:34.149 { 00:15:34.149 "name": "BaseBdev3", 00:15:34.149 "uuid": "7c2b7bdd-b621-470e-aa90-d6bea03ac3b5", 00:15:34.149 "is_configured": true, 00:15:34.149 "data_offset": 2048, 00:15:34.149 "data_size": 63488 00:15:34.149 } 00:15:34.149 ] 00:15:34.149 } 00:15:34.149 } 00:15:34.149 }' 00:15:34.149 13:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:34.149 13:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:34.149 BaseBdev2 00:15:34.149 BaseBdev3' 00:15:34.149 13:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:34.149 13:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:34.149 13:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:34.408 13:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:34.408 "name": "BaseBdev1", 00:15:34.408 "aliases": [ 00:15:34.408 "b69392e6-c131-4a94-82e9-a5354ab4d174" 00:15:34.408 ], 00:15:34.408 "product_name": "Malloc disk", 00:15:34.408 "block_size": 512, 00:15:34.408 "num_blocks": 65536, 00:15:34.408 
"uuid": "b69392e6-c131-4a94-82e9-a5354ab4d174", 00:15:34.408 "assigned_rate_limits": { 00:15:34.408 "rw_ios_per_sec": 0, 00:15:34.408 "rw_mbytes_per_sec": 0, 00:15:34.408 "r_mbytes_per_sec": 0, 00:15:34.408 "w_mbytes_per_sec": 0 00:15:34.408 }, 00:15:34.408 "claimed": true, 00:15:34.408 "claim_type": "exclusive_write", 00:15:34.408 "zoned": false, 00:15:34.408 "supported_io_types": { 00:15:34.408 "read": true, 00:15:34.408 "write": true, 00:15:34.408 "unmap": true, 00:15:34.408 "flush": true, 00:15:34.408 "reset": true, 00:15:34.408 "nvme_admin": false, 00:15:34.408 "nvme_io": false, 00:15:34.408 "nvme_io_md": false, 00:15:34.408 "write_zeroes": true, 00:15:34.408 "zcopy": true, 00:15:34.408 "get_zone_info": false, 00:15:34.408 "zone_management": false, 00:15:34.408 "zone_append": false, 00:15:34.408 "compare": false, 00:15:34.408 "compare_and_write": false, 00:15:34.408 "abort": true, 00:15:34.408 "seek_hole": false, 00:15:34.408 "seek_data": false, 00:15:34.408 "copy": true, 00:15:34.408 "nvme_iov_md": false 00:15:34.408 }, 00:15:34.408 "memory_domains": [ 00:15:34.408 { 00:15:34.408 "dma_device_id": "system", 00:15:34.408 "dma_device_type": 1 00:15:34.408 }, 00:15:34.408 { 00:15:34.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.408 "dma_device_type": 2 00:15:34.408 } 00:15:34.408 ], 00:15:34.408 "driver_specific": {} 00:15:34.408 }' 00:15:34.408 13:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:34.408 13:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:34.408 13:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:34.408 13:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:34.667 13:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:34.667 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
00:15:34.667 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:34.667 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:34.667 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:34.667 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:34.667 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:34.667 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:34.667 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:34.667 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:34.667 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:34.926 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:34.926 "name": "BaseBdev2", 00:15:34.926 "aliases": [ 00:15:34.926 "62178133-ec90-429a-9a3e-a7564750ef4f" 00:15:34.926 ], 00:15:34.926 "product_name": "Malloc disk", 00:15:34.926 "block_size": 512, 00:15:34.926 "num_blocks": 65536, 00:15:34.926 "uuid": "62178133-ec90-429a-9a3e-a7564750ef4f", 00:15:34.926 "assigned_rate_limits": { 00:15:34.926 "rw_ios_per_sec": 0, 00:15:34.926 "rw_mbytes_per_sec": 0, 00:15:34.926 "r_mbytes_per_sec": 0, 00:15:34.926 "w_mbytes_per_sec": 0 00:15:34.926 }, 00:15:34.926 "claimed": true, 00:15:34.926 "claim_type": "exclusive_write", 00:15:34.926 "zoned": false, 00:15:34.926 "supported_io_types": { 00:15:34.926 "read": true, 00:15:34.926 "write": true, 00:15:34.926 "unmap": true, 00:15:34.926 "flush": true, 00:15:34.926 "reset": true, 00:15:34.926 "nvme_admin": false, 00:15:34.926 
"nvme_io": false, 00:15:34.926 "nvme_io_md": false, 00:15:34.926 "write_zeroes": true, 00:15:34.926 "zcopy": true, 00:15:34.926 "get_zone_info": false, 00:15:34.926 "zone_management": false, 00:15:34.926 "zone_append": false, 00:15:34.926 "compare": false, 00:15:34.926 "compare_and_write": false, 00:15:34.926 "abort": true, 00:15:34.926 "seek_hole": false, 00:15:34.926 "seek_data": false, 00:15:34.926 "copy": true, 00:15:34.926 "nvme_iov_md": false 00:15:34.926 }, 00:15:34.926 "memory_domains": [ 00:15:34.926 { 00:15:34.926 "dma_device_id": "system", 00:15:34.926 "dma_device_type": 1 00:15:34.926 }, 00:15:34.926 { 00:15:34.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.926 "dma_device_type": 2 00:15:34.926 } 00:15:34.926 ], 00:15:34.926 "driver_specific": {} 00:15:34.926 }' 00:15:34.926 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:34.926 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:34.926 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:34.926 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:35.185 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:35.185 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:35.185 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:35.185 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:35.185 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:35.185 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:35.185 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:35.185 13:15:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:35.185 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:35.185 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:35.185 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:35.444 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:35.444 "name": "BaseBdev3", 00:15:35.444 "aliases": [ 00:15:35.444 "7c2b7bdd-b621-470e-aa90-d6bea03ac3b5" 00:15:35.444 ], 00:15:35.444 "product_name": "Malloc disk", 00:15:35.444 "block_size": 512, 00:15:35.444 "num_blocks": 65536, 00:15:35.444 "uuid": "7c2b7bdd-b621-470e-aa90-d6bea03ac3b5", 00:15:35.444 "assigned_rate_limits": { 00:15:35.444 "rw_ios_per_sec": 0, 00:15:35.444 "rw_mbytes_per_sec": 0, 00:15:35.444 "r_mbytes_per_sec": 0, 00:15:35.444 "w_mbytes_per_sec": 0 00:15:35.444 }, 00:15:35.444 "claimed": true, 00:15:35.444 "claim_type": "exclusive_write", 00:15:35.444 "zoned": false, 00:15:35.444 "supported_io_types": { 00:15:35.444 "read": true, 00:15:35.444 "write": true, 00:15:35.444 "unmap": true, 00:15:35.444 "flush": true, 00:15:35.444 "reset": true, 00:15:35.444 "nvme_admin": false, 00:15:35.444 "nvme_io": false, 00:15:35.444 "nvme_io_md": false, 00:15:35.444 "write_zeroes": true, 00:15:35.444 "zcopy": true, 00:15:35.444 "get_zone_info": false, 00:15:35.444 "zone_management": false, 00:15:35.444 "zone_append": false, 00:15:35.444 "compare": false, 00:15:35.444 "compare_and_write": false, 00:15:35.444 "abort": true, 00:15:35.444 "seek_hole": false, 00:15:35.444 "seek_data": false, 00:15:35.444 "copy": true, 00:15:35.444 "nvme_iov_md": false 00:15:35.444 }, 00:15:35.444 "memory_domains": [ 00:15:35.444 { 00:15:35.444 "dma_device_id": 
"system", 00:15:35.444 "dma_device_type": 1 00:15:35.444 }, 00:15:35.444 { 00:15:35.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.444 "dma_device_type": 2 00:15:35.444 } 00:15:35.444 ], 00:15:35.444 "driver_specific": {} 00:15:35.444 }' 00:15:35.444 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:35.444 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:35.702 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:35.702 13:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:35.702 13:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:35.702 13:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:35.702 13:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:35.702 13:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:35.702 13:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:35.702 13:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:35.702 13:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:35.961 13:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:35.961 13:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:35.961 [2024-07-26 13:15:16.433628] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:35.961 [2024-07-26 13:15:16.433651] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:35.961 [2024-07-26 
13:15:16.433688] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.961 13:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:35.961 13:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:15:35.961 13:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:35.961 13:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:15:35.961 13:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:35.961 13:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:15:35.961 13:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:35.961 13:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:35.961 13:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:35.961 13:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:35.961 13:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:35.961 13:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:35.961 13:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:35.961 13:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:35.961 13:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:35.961 13:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:15:35.961 13:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.220 13:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:36.220 "name": "Existed_Raid", 00:15:36.220 "uuid": "a52a9356-dba3-4479-8284-797c7f622cc0", 00:15:36.220 "strip_size_kb": 64, 00:15:36.220 "state": "offline", 00:15:36.220 "raid_level": "concat", 00:15:36.220 "superblock": true, 00:15:36.220 "num_base_bdevs": 3, 00:15:36.220 "num_base_bdevs_discovered": 2, 00:15:36.220 "num_base_bdevs_operational": 2, 00:15:36.220 "base_bdevs_list": [ 00:15:36.220 { 00:15:36.220 "name": null, 00:15:36.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.220 "is_configured": false, 00:15:36.220 "data_offset": 2048, 00:15:36.220 "data_size": 63488 00:15:36.220 }, 00:15:36.220 { 00:15:36.220 "name": "BaseBdev2", 00:15:36.220 "uuid": "62178133-ec90-429a-9a3e-a7564750ef4f", 00:15:36.220 "is_configured": true, 00:15:36.220 "data_offset": 2048, 00:15:36.220 "data_size": 63488 00:15:36.220 }, 00:15:36.220 { 00:15:36.220 "name": "BaseBdev3", 00:15:36.220 "uuid": "7c2b7bdd-b621-470e-aa90-d6bea03ac3b5", 00:15:36.220 "is_configured": true, 00:15:36.220 "data_offset": 2048, 00:15:36.220 "data_size": 63488 00:15:36.220 } 00:15:36.220 ] 00:15:36.220 }' 00:15:36.220 13:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:36.220 13:15:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.788 13:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:36.788 13:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:36.788 13:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:15:36.788 13:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:37.048 13:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:37.048 13:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.048 13:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:37.048 [2024-07-26 13:15:17.545542] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:37.048 13:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:37.048 13:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:37.048 13:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.048 13:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:37.307 13:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:37.307 13:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.307 13:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:37.875 [2024-07-26 13:15:18.209657] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:37.875 [2024-07-26 13:15:18.209696] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10f4710 name Existed_Raid, state offline 00:15:37.875 13:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( 
i++ )) 00:15:37.875 13:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:37.875 13:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.875 13:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:38.135 13:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:38.135 13:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:38.135 13:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:15:38.135 13:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:38.135 13:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:38.135 13:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:38.135 BaseBdev2 00:15:38.135 13:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:38.135 13:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:38.135 13:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:38.135 13:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:38.135 13:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:38.135 13:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:38.135 13:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:38.394 13:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:38.653 [ 00:15:38.653 { 00:15:38.653 "name": "BaseBdev2", 00:15:38.653 "aliases": [ 00:15:38.653 "c76c60c4-07f9-4fa7-9deb-14cbe0dc8225" 00:15:38.653 ], 00:15:38.653 "product_name": "Malloc disk", 00:15:38.653 "block_size": 512, 00:15:38.653 "num_blocks": 65536, 00:15:38.653 "uuid": "c76c60c4-07f9-4fa7-9deb-14cbe0dc8225", 00:15:38.653 "assigned_rate_limits": { 00:15:38.653 "rw_ios_per_sec": 0, 00:15:38.653 "rw_mbytes_per_sec": 0, 00:15:38.653 "r_mbytes_per_sec": 0, 00:15:38.653 "w_mbytes_per_sec": 0 00:15:38.653 }, 00:15:38.653 "claimed": false, 00:15:38.653 "zoned": false, 00:15:38.653 "supported_io_types": { 00:15:38.653 "read": true, 00:15:38.653 "write": true, 00:15:38.653 "unmap": true, 00:15:38.653 "flush": true, 00:15:38.653 "reset": true, 00:15:38.653 "nvme_admin": false, 00:15:38.653 "nvme_io": false, 00:15:38.653 "nvme_io_md": false, 00:15:38.653 "write_zeroes": true, 00:15:38.653 "zcopy": true, 00:15:38.653 "get_zone_info": false, 00:15:38.653 "zone_management": false, 00:15:38.653 "zone_append": false, 00:15:38.653 "compare": false, 00:15:38.653 "compare_and_write": false, 00:15:38.653 "abort": true, 00:15:38.653 "seek_hole": false, 00:15:38.653 "seek_data": false, 00:15:38.653 "copy": true, 00:15:38.653 "nvme_iov_md": false 00:15:38.653 }, 00:15:38.653 "memory_domains": [ 00:15:38.653 { 00:15:38.653 "dma_device_id": "system", 00:15:38.653 "dma_device_type": 1 00:15:38.653 }, 00:15:38.653 { 00:15:38.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.653 "dma_device_type": 2 00:15:38.653 } 00:15:38.653 ], 00:15:38.653 "driver_specific": {} 00:15:38.653 } 00:15:38.653 ] 00:15:38.653 13:15:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:38.653 13:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:38.653 13:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:38.653 13:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:38.653 BaseBdev3 00:15:38.653 13:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:38.653 13:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:38.653 13:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:38.653 13:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:38.653 13:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:38.653 13:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:38.653 13:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:38.912 13:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:39.171 [ 00:15:39.171 { 00:15:39.171 "name": "BaseBdev3", 00:15:39.171 "aliases": [ 00:15:39.171 "f21bbb66-09df-41ab-b851-c8c5e1c9161b" 00:15:39.171 ], 00:15:39.171 "product_name": "Malloc disk", 00:15:39.171 "block_size": 512, 00:15:39.171 "num_blocks": 65536, 00:15:39.171 "uuid": "f21bbb66-09df-41ab-b851-c8c5e1c9161b", 
00:15:39.171 "assigned_rate_limits": { 00:15:39.171 "rw_ios_per_sec": 0, 00:15:39.171 "rw_mbytes_per_sec": 0, 00:15:39.171 "r_mbytes_per_sec": 0, 00:15:39.171 "w_mbytes_per_sec": 0 00:15:39.171 }, 00:15:39.171 "claimed": false, 00:15:39.171 "zoned": false, 00:15:39.171 "supported_io_types": { 00:15:39.171 "read": true, 00:15:39.171 "write": true, 00:15:39.171 "unmap": true, 00:15:39.171 "flush": true, 00:15:39.171 "reset": true, 00:15:39.171 "nvme_admin": false, 00:15:39.171 "nvme_io": false, 00:15:39.171 "nvme_io_md": false, 00:15:39.171 "write_zeroes": true, 00:15:39.171 "zcopy": true, 00:15:39.171 "get_zone_info": false, 00:15:39.171 "zone_management": false, 00:15:39.171 "zone_append": false, 00:15:39.171 "compare": false, 00:15:39.171 "compare_and_write": false, 00:15:39.171 "abort": true, 00:15:39.171 "seek_hole": false, 00:15:39.171 "seek_data": false, 00:15:39.171 "copy": true, 00:15:39.171 "nvme_iov_md": false 00:15:39.171 }, 00:15:39.171 "memory_domains": [ 00:15:39.171 { 00:15:39.171 "dma_device_id": "system", 00:15:39.171 "dma_device_type": 1 00:15:39.171 }, 00:15:39.171 { 00:15:39.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.171 "dma_device_type": 2 00:15:39.171 } 00:15:39.171 ], 00:15:39.172 "driver_specific": {} 00:15:39.172 } 00:15:39.172 ] 00:15:39.172 13:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:39.172 13:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:39.172 13:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:39.172 13:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:39.172 [2024-07-26 13:15:19.668066] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev1 00:15:39.172 [2024-07-26 13:15:19.668104] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:39.172 [2024-07-26 13:15:19.668122] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:39.172 [2024-07-26 13:15:19.669360] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:39.172 13:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:39.172 13:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:39.172 13:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:39.172 13:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:39.172 13:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:39.172 13:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:39.172 13:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:39.172 13:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:39.172 13:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:39.172 13:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:39.172 13:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.172 13:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.740 13:15:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:39.740 "name": "Existed_Raid", 00:15:39.740 "uuid": "46c8e153-956e-4e91-ac11-a30df7dce384", 00:15:39.740 "strip_size_kb": 64, 00:15:39.740 "state": "configuring", 00:15:39.740 "raid_level": "concat", 00:15:39.740 "superblock": true, 00:15:39.740 "num_base_bdevs": 3, 00:15:39.740 "num_base_bdevs_discovered": 2, 00:15:39.740 "num_base_bdevs_operational": 3, 00:15:39.740 "base_bdevs_list": [ 00:15:39.740 { 00:15:39.740 "name": "BaseBdev1", 00:15:39.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.740 "is_configured": false, 00:15:39.740 "data_offset": 0, 00:15:39.740 "data_size": 0 00:15:39.740 }, 00:15:39.740 { 00:15:39.740 "name": "BaseBdev2", 00:15:39.740 "uuid": "c76c60c4-07f9-4fa7-9deb-14cbe0dc8225", 00:15:39.740 "is_configured": true, 00:15:39.740 "data_offset": 2048, 00:15:39.740 "data_size": 63488 00:15:39.740 }, 00:15:39.740 { 00:15:39.740 "name": "BaseBdev3", 00:15:39.740 "uuid": "f21bbb66-09df-41ab-b851-c8c5e1c9161b", 00:15:39.740 "is_configured": true, 00:15:39.740 "data_offset": 2048, 00:15:39.740 "data_size": 63488 00:15:39.740 } 00:15:39.740 ] 00:15:39.740 }' 00:15:39.740 13:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:39.740 13:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.307 13:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:40.566 [2024-07-26 13:15:20.987496] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:40.566 13:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:40.566 13:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:40.566 13:15:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:40.566 13:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:40.566 13:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:40.566 13:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:40.566 13:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:40.566 13:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:40.566 13:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:40.566 13:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:40.566 13:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.566 13:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.825 13:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:40.825 "name": "Existed_Raid", 00:15:40.825 "uuid": "46c8e153-956e-4e91-ac11-a30df7dce384", 00:15:40.825 "strip_size_kb": 64, 00:15:40.825 "state": "configuring", 00:15:40.825 "raid_level": "concat", 00:15:40.825 "superblock": true, 00:15:40.825 "num_base_bdevs": 3, 00:15:40.825 "num_base_bdevs_discovered": 1, 00:15:40.825 "num_base_bdevs_operational": 3, 00:15:40.825 "base_bdevs_list": [ 00:15:40.825 { 00:15:40.825 "name": "BaseBdev1", 00:15:40.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.825 "is_configured": false, 00:15:40.825 "data_offset": 0, 00:15:40.825 "data_size": 0 00:15:40.825 }, 00:15:40.825 { 00:15:40.825 "name": 
null, 00:15:40.825 "uuid": "c76c60c4-07f9-4fa7-9deb-14cbe0dc8225", 00:15:40.825 "is_configured": false, 00:15:40.825 "data_offset": 2048, 00:15:40.825 "data_size": 63488 00:15:40.825 }, 00:15:40.825 { 00:15:40.825 "name": "BaseBdev3", 00:15:40.825 "uuid": "f21bbb66-09df-41ab-b851-c8c5e1c9161b", 00:15:40.825 "is_configured": true, 00:15:40.825 "data_offset": 2048, 00:15:40.825 "data_size": 63488 00:15:40.825 } 00:15:40.825 ] 00:15:40.825 }' 00:15:40.825 13:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:40.825 13:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.393 13:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.393 13:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:41.393 13:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:15:41.393 13:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:41.652 [2024-07-26 13:15:22.121596] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:41.652 BaseBdev1 00:15:41.652 13:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:41.652 13:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:41.652 13:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:41.652 13:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:41.652 13:15:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:41.652 13:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:41.652 13:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:41.912 13:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:42.170 [ 00:15:42.170 { 00:15:42.170 "name": "BaseBdev1", 00:15:42.170 "aliases": [ 00:15:42.170 "97dfbeef-c5cd-4b84-84aa-f3cfeff4a890" 00:15:42.170 ], 00:15:42.170 "product_name": "Malloc disk", 00:15:42.170 "block_size": 512, 00:15:42.170 "num_blocks": 65536, 00:15:42.170 "uuid": "97dfbeef-c5cd-4b84-84aa-f3cfeff4a890", 00:15:42.170 "assigned_rate_limits": { 00:15:42.170 "rw_ios_per_sec": 0, 00:15:42.170 "rw_mbytes_per_sec": 0, 00:15:42.170 "r_mbytes_per_sec": 0, 00:15:42.170 "w_mbytes_per_sec": 0 00:15:42.170 }, 00:15:42.170 "claimed": true, 00:15:42.170 "claim_type": "exclusive_write", 00:15:42.170 "zoned": false, 00:15:42.170 "supported_io_types": { 00:15:42.170 "read": true, 00:15:42.170 "write": true, 00:15:42.170 "unmap": true, 00:15:42.170 "flush": true, 00:15:42.170 "reset": true, 00:15:42.170 "nvme_admin": false, 00:15:42.170 "nvme_io": false, 00:15:42.170 "nvme_io_md": false, 00:15:42.170 "write_zeroes": true, 00:15:42.170 "zcopy": true, 00:15:42.170 "get_zone_info": false, 00:15:42.170 "zone_management": false, 00:15:42.170 "zone_append": false, 00:15:42.170 "compare": false, 00:15:42.170 "compare_and_write": false, 00:15:42.170 "abort": true, 00:15:42.170 "seek_hole": false, 00:15:42.170 "seek_data": false, 00:15:42.170 "copy": true, 00:15:42.170 "nvme_iov_md": false 00:15:42.170 }, 00:15:42.170 "memory_domains": [ 00:15:42.170 { 00:15:42.170 "dma_device_id": 
"system", 00:15:42.170 "dma_device_type": 1 00:15:42.170 }, 00:15:42.170 { 00:15:42.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.170 "dma_device_type": 2 00:15:42.170 } 00:15:42.170 ], 00:15:42.170 "driver_specific": {} 00:15:42.170 } 00:15:42.170 ] 00:15:42.170 13:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:42.170 13:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:42.170 13:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:42.170 13:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:42.170 13:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:42.170 13:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:42.170 13:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:42.170 13:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:42.170 13:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:42.170 13:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:42.170 13:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:42.170 13:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.170 13:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.429 13:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:15:42.429 "name": "Existed_Raid", 00:15:42.429 "uuid": "46c8e153-956e-4e91-ac11-a30df7dce384", 00:15:42.429 "strip_size_kb": 64, 00:15:42.429 "state": "configuring", 00:15:42.429 "raid_level": "concat", 00:15:42.429 "superblock": true, 00:15:42.429 "num_base_bdevs": 3, 00:15:42.429 "num_base_bdevs_discovered": 2, 00:15:42.429 "num_base_bdevs_operational": 3, 00:15:42.429 "base_bdevs_list": [ 00:15:42.429 { 00:15:42.429 "name": "BaseBdev1", 00:15:42.429 "uuid": "97dfbeef-c5cd-4b84-84aa-f3cfeff4a890", 00:15:42.429 "is_configured": true, 00:15:42.429 "data_offset": 2048, 00:15:42.429 "data_size": 63488 00:15:42.429 }, 00:15:42.429 { 00:15:42.429 "name": null, 00:15:42.429 "uuid": "c76c60c4-07f9-4fa7-9deb-14cbe0dc8225", 00:15:42.429 "is_configured": false, 00:15:42.429 "data_offset": 2048, 00:15:42.429 "data_size": 63488 00:15:42.429 }, 00:15:42.430 { 00:15:42.430 "name": "BaseBdev3", 00:15:42.430 "uuid": "f21bbb66-09df-41ab-b851-c8c5e1c9161b", 00:15:42.430 "is_configured": true, 00:15:42.430 "data_offset": 2048, 00:15:42.430 "data_size": 63488 00:15:42.430 } 00:15:42.430 ] 00:15:42.430 }' 00:15:42.430 13:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:42.430 13:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.996 13:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.996 13:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:43.254 13:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:43.254 13:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 
00:15:43.254 [2024-07-26 13:15:23.749904] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:43.254 13:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:43.254 13:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:43.254 13:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:43.254 13:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:43.254 13:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:43.254 13:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:43.254 13:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:43.254 13:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:43.254 13:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:43.254 13:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:43.254 13:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.254 13:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.514 13:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:43.514 "name": "Existed_Raid", 00:15:43.514 "uuid": "46c8e153-956e-4e91-ac11-a30df7dce384", 00:15:43.514 "strip_size_kb": 64, 00:15:43.514 "state": "configuring", 00:15:43.514 "raid_level": "concat", 00:15:43.514 "superblock": true, 00:15:43.514 
"num_base_bdevs": 3, 00:15:43.514 "num_base_bdevs_discovered": 1, 00:15:43.514 "num_base_bdevs_operational": 3, 00:15:43.514 "base_bdevs_list": [ 00:15:43.514 { 00:15:43.514 "name": "BaseBdev1", 00:15:43.514 "uuid": "97dfbeef-c5cd-4b84-84aa-f3cfeff4a890", 00:15:43.514 "is_configured": true, 00:15:43.514 "data_offset": 2048, 00:15:43.514 "data_size": 63488 00:15:43.514 }, 00:15:43.514 { 00:15:43.514 "name": null, 00:15:43.514 "uuid": "c76c60c4-07f9-4fa7-9deb-14cbe0dc8225", 00:15:43.514 "is_configured": false, 00:15:43.514 "data_offset": 2048, 00:15:43.514 "data_size": 63488 00:15:43.514 }, 00:15:43.514 { 00:15:43.514 "name": null, 00:15:43.514 "uuid": "f21bbb66-09df-41ab-b851-c8c5e1c9161b", 00:15:43.514 "is_configured": false, 00:15:43.514 "data_offset": 2048, 00:15:43.514 "data_size": 63488 00:15:43.514 } 00:15:43.514 ] 00:15:43.514 }' 00:15:43.514 13:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:43.514 13:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.081 13:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:44.081 13:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.340 13:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:44.340 13:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:44.908 [2024-07-26 13:15:25.265922] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:44.908 13:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring 
concat 64 3 00:15:44.908 13:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:44.908 13:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:44.908 13:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:44.908 13:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:44.908 13:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:44.908 13:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:44.908 13:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:44.908 13:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:44.908 13:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:44.908 13:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.908 13:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.187 13:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:45.187 "name": "Existed_Raid", 00:15:45.187 "uuid": "46c8e153-956e-4e91-ac11-a30df7dce384", 00:15:45.187 "strip_size_kb": 64, 00:15:45.188 "state": "configuring", 00:15:45.188 "raid_level": "concat", 00:15:45.188 "superblock": true, 00:15:45.188 "num_base_bdevs": 3, 00:15:45.188 "num_base_bdevs_discovered": 2, 00:15:45.188 "num_base_bdevs_operational": 3, 00:15:45.188 "base_bdevs_list": [ 00:15:45.188 { 00:15:45.188 "name": "BaseBdev1", 00:15:45.188 "uuid": 
"97dfbeef-c5cd-4b84-84aa-f3cfeff4a890", 00:15:45.188 "is_configured": true, 00:15:45.188 "data_offset": 2048, 00:15:45.188 "data_size": 63488 00:15:45.188 }, 00:15:45.188 { 00:15:45.188 "name": null, 00:15:45.188 "uuid": "c76c60c4-07f9-4fa7-9deb-14cbe0dc8225", 00:15:45.188 "is_configured": false, 00:15:45.188 "data_offset": 2048, 00:15:45.188 "data_size": 63488 00:15:45.188 }, 00:15:45.188 { 00:15:45.188 "name": "BaseBdev3", 00:15:45.188 "uuid": "f21bbb66-09df-41ab-b851-c8c5e1c9161b", 00:15:45.188 "is_configured": true, 00:15:45.188 "data_offset": 2048, 00:15:45.188 "data_size": 63488 00:15:45.188 } 00:15:45.188 ] 00:15:45.188 }' 00:15:45.188 13:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:45.188 13:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.773 13:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.773 13:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:46.032 13:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:46.032 13:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:46.032 [2024-07-26 13:15:26.525258] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:46.032 13:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:46.032 13:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:46.032 13:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:15:46.032 13:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:46.032 13:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:46.032 13:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:46.032 13:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:46.032 13:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:46.032 13:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:46.032 13:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:46.032 13:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.033 13:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.292 13:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:46.292 "name": "Existed_Raid", 00:15:46.292 "uuid": "46c8e153-956e-4e91-ac11-a30df7dce384", 00:15:46.292 "strip_size_kb": 64, 00:15:46.292 "state": "configuring", 00:15:46.292 "raid_level": "concat", 00:15:46.292 "superblock": true, 00:15:46.292 "num_base_bdevs": 3, 00:15:46.292 "num_base_bdevs_discovered": 1, 00:15:46.292 "num_base_bdevs_operational": 3, 00:15:46.292 "base_bdevs_list": [ 00:15:46.292 { 00:15:46.292 "name": null, 00:15:46.292 "uuid": "97dfbeef-c5cd-4b84-84aa-f3cfeff4a890", 00:15:46.292 "is_configured": false, 00:15:46.292 "data_offset": 2048, 00:15:46.292 "data_size": 63488 00:15:46.292 }, 00:15:46.292 { 00:15:46.292 "name": null, 00:15:46.292 "uuid": "c76c60c4-07f9-4fa7-9deb-14cbe0dc8225", 
00:15:46.292 "is_configured": false, 00:15:46.292 "data_offset": 2048, 00:15:46.292 "data_size": 63488 00:15:46.292 }, 00:15:46.292 { 00:15:46.292 "name": "BaseBdev3", 00:15:46.292 "uuid": "f21bbb66-09df-41ab-b851-c8c5e1c9161b", 00:15:46.292 "is_configured": true, 00:15:46.292 "data_offset": 2048, 00:15:46.292 "data_size": 63488 00:15:46.292 } 00:15:46.292 ] 00:15:46.292 }' 00:15:46.292 13:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:46.292 13:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.860 13:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.860 13:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:47.119 13:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:47.119 13:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:47.378 [2024-07-26 13:15:27.802735] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:47.378 13:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:47.378 13:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:47.378 13:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:47.378 13:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:47.378 13:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:15:47.378 13:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:47.378 13:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:47.378 13:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:47.378 13:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:47.378 13:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:47.378 13:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.378 13:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.637 13:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:47.637 "name": "Existed_Raid", 00:15:47.637 "uuid": "46c8e153-956e-4e91-ac11-a30df7dce384", 00:15:47.637 "strip_size_kb": 64, 00:15:47.637 "state": "configuring", 00:15:47.638 "raid_level": "concat", 00:15:47.638 "superblock": true, 00:15:47.638 "num_base_bdevs": 3, 00:15:47.638 "num_base_bdevs_discovered": 2, 00:15:47.638 "num_base_bdevs_operational": 3, 00:15:47.638 "base_bdevs_list": [ 00:15:47.638 { 00:15:47.638 "name": null, 00:15:47.638 "uuid": "97dfbeef-c5cd-4b84-84aa-f3cfeff4a890", 00:15:47.638 "is_configured": false, 00:15:47.638 "data_offset": 2048, 00:15:47.638 "data_size": 63488 00:15:47.638 }, 00:15:47.638 { 00:15:47.638 "name": "BaseBdev2", 00:15:47.638 "uuid": "c76c60c4-07f9-4fa7-9deb-14cbe0dc8225", 00:15:47.638 "is_configured": true, 00:15:47.638 "data_offset": 2048, 00:15:47.638 "data_size": 63488 00:15:47.638 }, 00:15:47.638 { 00:15:47.638 "name": "BaseBdev3", 00:15:47.638 "uuid": "f21bbb66-09df-41ab-b851-c8c5e1c9161b", 
00:15:47.638 "is_configured": true, 00:15:47.638 "data_offset": 2048, 00:15:47.638 "data_size": 63488 00:15:47.638 } 00:15:47.638 ] 00:15:47.638 }' 00:15:47.638 13:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:47.638 13:15:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.328 13:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.328 13:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:48.328 13:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:48.328 13:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.328 13:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:48.656 13:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 97dfbeef-c5cd-4b84-84aa-f3cfeff4a890 00:15:48.915 [2024-07-26 13:15:29.253775] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:48.915 [2024-07-26 13:15:29.253911] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x10f5040 00:15:48.915 [2024-07-26 13:15:29.253923] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:48.915 [2024-07-26 13:15:29.254095] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12a7e40 00:15:48.915 [2024-07-26 13:15:29.254206] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev 
generic 0x10f5040 00:15:48.915 [2024-07-26 13:15:29.254216] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x10f5040 00:15:48.915 [2024-07-26 13:15:29.254300] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.915 NewBaseBdev 00:15:48.915 13:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:48.915 13:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:48.915 13:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:48.915 13:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:48.915 13:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:48.915 13:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:48.915 13:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:49.174 13:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:49.174 [ 00:15:49.174 { 00:15:49.174 "name": "NewBaseBdev", 00:15:49.174 "aliases": [ 00:15:49.174 "97dfbeef-c5cd-4b84-84aa-f3cfeff4a890" 00:15:49.174 ], 00:15:49.174 "product_name": "Malloc disk", 00:15:49.174 "block_size": 512, 00:15:49.174 "num_blocks": 65536, 00:15:49.175 "uuid": "97dfbeef-c5cd-4b84-84aa-f3cfeff4a890", 00:15:49.175 "assigned_rate_limits": { 00:15:49.175 "rw_ios_per_sec": 0, 00:15:49.175 "rw_mbytes_per_sec": 0, 00:15:49.175 "r_mbytes_per_sec": 0, 00:15:49.175 "w_mbytes_per_sec": 0 00:15:49.175 }, 00:15:49.175 "claimed": true, 
00:15:49.175 "claim_type": "exclusive_write", 00:15:49.175 "zoned": false, 00:15:49.175 "supported_io_types": { 00:15:49.175 "read": true, 00:15:49.175 "write": true, 00:15:49.175 "unmap": true, 00:15:49.175 "flush": true, 00:15:49.175 "reset": true, 00:15:49.175 "nvme_admin": false, 00:15:49.175 "nvme_io": false, 00:15:49.175 "nvme_io_md": false, 00:15:49.175 "write_zeroes": true, 00:15:49.175 "zcopy": true, 00:15:49.175 "get_zone_info": false, 00:15:49.175 "zone_management": false, 00:15:49.175 "zone_append": false, 00:15:49.175 "compare": false, 00:15:49.175 "compare_and_write": false, 00:15:49.175 "abort": true, 00:15:49.175 "seek_hole": false, 00:15:49.175 "seek_data": false, 00:15:49.175 "copy": true, 00:15:49.175 "nvme_iov_md": false 00:15:49.175 }, 00:15:49.175 "memory_domains": [ 00:15:49.175 { 00:15:49.175 "dma_device_id": "system", 00:15:49.175 "dma_device_type": 1 00:15:49.175 }, 00:15:49.175 { 00:15:49.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.175 "dma_device_type": 2 00:15:49.175 } 00:15:49.175 ], 00:15:49.175 "driver_specific": {} 00:15:49.175 } 00:15:49.175 ] 00:15:49.175 13:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:49.175 13:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:15:49.175 13:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:49.175 13:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:49.175 13:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:49.175 13:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:49.175 13:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:49.175 13:15:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:49.175 13:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:49.175 13:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:49.175 13:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:49.175 13:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.175 13:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.434 13:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:49.434 "name": "Existed_Raid", 00:15:49.434 "uuid": "46c8e153-956e-4e91-ac11-a30df7dce384", 00:15:49.434 "strip_size_kb": 64, 00:15:49.434 "state": "online", 00:15:49.434 "raid_level": "concat", 00:15:49.434 "superblock": true, 00:15:49.434 "num_base_bdevs": 3, 00:15:49.434 "num_base_bdevs_discovered": 3, 00:15:49.434 "num_base_bdevs_operational": 3, 00:15:49.434 "base_bdevs_list": [ 00:15:49.434 { 00:15:49.434 "name": "NewBaseBdev", 00:15:49.434 "uuid": "97dfbeef-c5cd-4b84-84aa-f3cfeff4a890", 00:15:49.434 "is_configured": true, 00:15:49.434 "data_offset": 2048, 00:15:49.434 "data_size": 63488 00:15:49.434 }, 00:15:49.434 { 00:15:49.434 "name": "BaseBdev2", 00:15:49.434 "uuid": "c76c60c4-07f9-4fa7-9deb-14cbe0dc8225", 00:15:49.434 "is_configured": true, 00:15:49.434 "data_offset": 2048, 00:15:49.434 "data_size": 63488 00:15:49.434 }, 00:15:49.434 { 00:15:49.434 "name": "BaseBdev3", 00:15:49.434 "uuid": "f21bbb66-09df-41ab-b851-c8c5e1c9161b", 00:15:49.434 "is_configured": true, 00:15:49.434 "data_offset": 2048, 00:15:49.434 "data_size": 63488 00:15:49.434 } 00:15:49.434 ] 00:15:49.434 }' 00:15:49.434 
13:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:49.434 13:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.001 13:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:50.001 13:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:50.001 13:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:50.001 13:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:50.001 13:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:50.001 13:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:50.001 13:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:50.001 13:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:50.260 [2024-07-26 13:15:30.718003] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.260 13:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:50.260 "name": "Existed_Raid", 00:15:50.260 "aliases": [ 00:15:50.260 "46c8e153-956e-4e91-ac11-a30df7dce384" 00:15:50.260 ], 00:15:50.260 "product_name": "Raid Volume", 00:15:50.260 "block_size": 512, 00:15:50.260 "num_blocks": 190464, 00:15:50.260 "uuid": "46c8e153-956e-4e91-ac11-a30df7dce384", 00:15:50.260 "assigned_rate_limits": { 00:15:50.260 "rw_ios_per_sec": 0, 00:15:50.260 "rw_mbytes_per_sec": 0, 00:15:50.260 "r_mbytes_per_sec": 0, 00:15:50.260 "w_mbytes_per_sec": 0 00:15:50.260 }, 00:15:50.260 "claimed": false, 00:15:50.260 "zoned": false, 00:15:50.260 
"supported_io_types": { 00:15:50.260 "read": true, 00:15:50.260 "write": true, 00:15:50.260 "unmap": true, 00:15:50.260 "flush": true, 00:15:50.260 "reset": true, 00:15:50.260 "nvme_admin": false, 00:15:50.260 "nvme_io": false, 00:15:50.260 "nvme_io_md": false, 00:15:50.260 "write_zeroes": true, 00:15:50.260 "zcopy": false, 00:15:50.260 "get_zone_info": false, 00:15:50.260 "zone_management": false, 00:15:50.260 "zone_append": false, 00:15:50.260 "compare": false, 00:15:50.260 "compare_and_write": false, 00:15:50.260 "abort": false, 00:15:50.260 "seek_hole": false, 00:15:50.260 "seek_data": false, 00:15:50.260 "copy": false, 00:15:50.260 "nvme_iov_md": false 00:15:50.260 }, 00:15:50.260 "memory_domains": [ 00:15:50.260 { 00:15:50.260 "dma_device_id": "system", 00:15:50.260 "dma_device_type": 1 00:15:50.260 }, 00:15:50.260 { 00:15:50.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.260 "dma_device_type": 2 00:15:50.260 }, 00:15:50.260 { 00:15:50.260 "dma_device_id": "system", 00:15:50.260 "dma_device_type": 1 00:15:50.260 }, 00:15:50.260 { 00:15:50.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.260 "dma_device_type": 2 00:15:50.260 }, 00:15:50.260 { 00:15:50.260 "dma_device_id": "system", 00:15:50.260 "dma_device_type": 1 00:15:50.260 }, 00:15:50.260 { 00:15:50.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.260 "dma_device_type": 2 00:15:50.260 } 00:15:50.260 ], 00:15:50.260 "driver_specific": { 00:15:50.260 "raid": { 00:15:50.260 "uuid": "46c8e153-956e-4e91-ac11-a30df7dce384", 00:15:50.260 "strip_size_kb": 64, 00:15:50.260 "state": "online", 00:15:50.260 "raid_level": "concat", 00:15:50.260 "superblock": true, 00:15:50.260 "num_base_bdevs": 3, 00:15:50.260 "num_base_bdevs_discovered": 3, 00:15:50.260 "num_base_bdevs_operational": 3, 00:15:50.260 "base_bdevs_list": [ 00:15:50.260 { 00:15:50.260 "name": "NewBaseBdev", 00:15:50.260 "uuid": "97dfbeef-c5cd-4b84-84aa-f3cfeff4a890", 00:15:50.260 "is_configured": true, 00:15:50.260 "data_offset": 2048, 
00:15:50.260 "data_size": 63488 00:15:50.260 }, 00:15:50.260 { 00:15:50.260 "name": "BaseBdev2", 00:15:50.260 "uuid": "c76c60c4-07f9-4fa7-9deb-14cbe0dc8225", 00:15:50.260 "is_configured": true, 00:15:50.260 "data_offset": 2048, 00:15:50.260 "data_size": 63488 00:15:50.260 }, 00:15:50.260 { 00:15:50.260 "name": "BaseBdev3", 00:15:50.260 "uuid": "f21bbb66-09df-41ab-b851-c8c5e1c9161b", 00:15:50.260 "is_configured": true, 00:15:50.260 "data_offset": 2048, 00:15:50.260 "data_size": 63488 00:15:50.260 } 00:15:50.260 ] 00:15:50.260 } 00:15:50.260 } 00:15:50.260 }' 00:15:50.260 13:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:50.260 13:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:50.260 BaseBdev2 00:15:50.260 BaseBdev3' 00:15:50.260 13:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:50.260 13:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:50.260 13:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:50.519 13:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:50.519 "name": "NewBaseBdev", 00:15:50.519 "aliases": [ 00:15:50.519 "97dfbeef-c5cd-4b84-84aa-f3cfeff4a890" 00:15:50.519 ], 00:15:50.519 "product_name": "Malloc disk", 00:15:50.519 "block_size": 512, 00:15:50.519 "num_blocks": 65536, 00:15:50.519 "uuid": "97dfbeef-c5cd-4b84-84aa-f3cfeff4a890", 00:15:50.519 "assigned_rate_limits": { 00:15:50.519 "rw_ios_per_sec": 0, 00:15:50.519 "rw_mbytes_per_sec": 0, 00:15:50.519 "r_mbytes_per_sec": 0, 00:15:50.519 "w_mbytes_per_sec": 0 00:15:50.519 }, 00:15:50.519 "claimed": true, 00:15:50.519 "claim_type": 
"exclusive_write", 00:15:50.519 "zoned": false, 00:15:50.519 "supported_io_types": { 00:15:50.519 "read": true, 00:15:50.519 "write": true, 00:15:50.519 "unmap": true, 00:15:50.519 "flush": true, 00:15:50.519 "reset": true, 00:15:50.519 "nvme_admin": false, 00:15:50.519 "nvme_io": false, 00:15:50.519 "nvme_io_md": false, 00:15:50.519 "write_zeroes": true, 00:15:50.519 "zcopy": true, 00:15:50.519 "get_zone_info": false, 00:15:50.519 "zone_management": false, 00:15:50.519 "zone_append": false, 00:15:50.519 "compare": false, 00:15:50.519 "compare_and_write": false, 00:15:50.519 "abort": true, 00:15:50.519 "seek_hole": false, 00:15:50.519 "seek_data": false, 00:15:50.519 "copy": true, 00:15:50.519 "nvme_iov_md": false 00:15:50.519 }, 00:15:50.519 "memory_domains": [ 00:15:50.519 { 00:15:50.519 "dma_device_id": "system", 00:15:50.519 "dma_device_type": 1 00:15:50.519 }, 00:15:50.519 { 00:15:50.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.519 "dma_device_type": 2 00:15:50.519 } 00:15:50.519 ], 00:15:50.519 "driver_specific": {} 00:15:50.519 }' 00:15:50.519 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:50.519 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:50.778 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:50.778 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:50.778 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:50.778 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:50.778 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:50.778 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:50.778 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # 
[[ null == null ]] 00:15:50.778 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:50.778 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:51.037 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:51.037 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:51.037 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:51.037 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:51.296 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:51.296 "name": "BaseBdev2", 00:15:51.296 "aliases": [ 00:15:51.296 "c76c60c4-07f9-4fa7-9deb-14cbe0dc8225" 00:15:51.296 ], 00:15:51.296 "product_name": "Malloc disk", 00:15:51.296 "block_size": 512, 00:15:51.296 "num_blocks": 65536, 00:15:51.296 "uuid": "c76c60c4-07f9-4fa7-9deb-14cbe0dc8225", 00:15:51.296 "assigned_rate_limits": { 00:15:51.296 "rw_ios_per_sec": 0, 00:15:51.296 "rw_mbytes_per_sec": 0, 00:15:51.296 "r_mbytes_per_sec": 0, 00:15:51.296 "w_mbytes_per_sec": 0 00:15:51.296 }, 00:15:51.296 "claimed": true, 00:15:51.296 "claim_type": "exclusive_write", 00:15:51.296 "zoned": false, 00:15:51.296 "supported_io_types": { 00:15:51.296 "read": true, 00:15:51.296 "write": true, 00:15:51.296 "unmap": true, 00:15:51.296 "flush": true, 00:15:51.296 "reset": true, 00:15:51.296 "nvme_admin": false, 00:15:51.296 "nvme_io": false, 00:15:51.296 "nvme_io_md": false, 00:15:51.296 "write_zeroes": true, 00:15:51.296 "zcopy": true, 00:15:51.296 "get_zone_info": false, 00:15:51.296 "zone_management": false, 00:15:51.296 "zone_append": false, 00:15:51.296 "compare": false, 00:15:51.296 "compare_and_write": false, 
00:15:51.296 "abort": true, 00:15:51.296 "seek_hole": false, 00:15:51.296 "seek_data": false, 00:15:51.296 "copy": true, 00:15:51.296 "nvme_iov_md": false 00:15:51.296 }, 00:15:51.296 "memory_domains": [ 00:15:51.296 { 00:15:51.296 "dma_device_id": "system", 00:15:51.296 "dma_device_type": 1 00:15:51.296 }, 00:15:51.296 { 00:15:51.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.296 "dma_device_type": 2 00:15:51.296 } 00:15:51.296 ], 00:15:51.296 "driver_specific": {} 00:15:51.296 }' 00:15:51.296 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.296 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.296 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:51.296 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:51.296 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:51.296 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:51.296 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:51.296 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:51.555 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:51.555 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:51.555 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:51.555 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:51.555 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:51.555 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:51.555 13:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:51.814 13:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:51.814 "name": "BaseBdev3", 00:15:51.814 "aliases": [ 00:15:51.814 "f21bbb66-09df-41ab-b851-c8c5e1c9161b" 00:15:51.814 ], 00:15:51.814 "product_name": "Malloc disk", 00:15:51.814 "block_size": 512, 00:15:51.814 "num_blocks": 65536, 00:15:51.814 "uuid": "f21bbb66-09df-41ab-b851-c8c5e1c9161b", 00:15:51.814 "assigned_rate_limits": { 00:15:51.814 "rw_ios_per_sec": 0, 00:15:51.814 "rw_mbytes_per_sec": 0, 00:15:51.814 "r_mbytes_per_sec": 0, 00:15:51.814 "w_mbytes_per_sec": 0 00:15:51.814 }, 00:15:51.814 "claimed": true, 00:15:51.814 "claim_type": "exclusive_write", 00:15:51.814 "zoned": false, 00:15:51.814 "supported_io_types": { 00:15:51.814 "read": true, 00:15:51.814 "write": true, 00:15:51.814 "unmap": true, 00:15:51.814 "flush": true, 00:15:51.814 "reset": true, 00:15:51.814 "nvme_admin": false, 00:15:51.814 "nvme_io": false, 00:15:51.814 "nvme_io_md": false, 00:15:51.814 "write_zeroes": true, 00:15:51.814 "zcopy": true, 00:15:51.814 "get_zone_info": false, 00:15:51.814 "zone_management": false, 00:15:51.814 "zone_append": false, 00:15:51.814 "compare": false, 00:15:51.814 "compare_and_write": false, 00:15:51.814 "abort": true, 00:15:51.814 "seek_hole": false, 00:15:51.814 "seek_data": false, 00:15:51.814 "copy": true, 00:15:51.814 "nvme_iov_md": false 00:15:51.814 }, 00:15:51.814 "memory_domains": [ 00:15:51.814 { 00:15:51.814 "dma_device_id": "system", 00:15:51.814 "dma_device_type": 1 00:15:51.814 }, 00:15:51.814 { 00:15:51.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.814 "dma_device_type": 2 00:15:51.814 } 00:15:51.814 ], 00:15:51.814 "driver_specific": {} 00:15:51.814 }' 00:15:51.814 13:15:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.814 13:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.814 13:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:51.814 13:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:51.814 13:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:51.814 13:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:51.814 13:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:52.073 13:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:52.073 13:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:52.073 13:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:52.073 13:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:52.073 13:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:52.073 13:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:52.332 [2024-07-26 13:15:32.678930] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:52.332 [2024-07-26 13:15:32.678953] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:52.332 [2024-07-26 13:15:32.679000] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.332 [2024-07-26 13:15:32.679047] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.332 
[2024-07-26 13:15:32.679058] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10f5040 name Existed_Raid, state offline 00:15:52.332 13:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 700018 00:15:52.332 13:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 700018 ']' 00:15:52.332 13:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 700018 00:15:52.332 13:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:52.332 13:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:52.332 13:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 700018 00:15:52.332 13:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:52.332 13:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:52.332 13:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 700018' 00:15:52.332 killing process with pid 700018 00:15:52.332 13:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 700018 00:15:52.332 [2024-07-26 13:15:32.755719] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.332 13:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 700018 00:15:52.332 [2024-07-26 13:15:32.778744] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:52.591 13:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:52.591 00:15:52.591 real 0m26.655s 00:15:52.592 user 0m49.032s 00:15:52.592 sys 0m4.710s 00:15:52.592 13:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:15:52.592 13:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.592 ************************************ 00:15:52.592 END TEST raid_state_function_test_sb 00:15:52.592 ************************************ 00:15:52.592 13:15:33 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:15:52.592 13:15:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:52.592 13:15:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:52.592 13:15:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:52.592 ************************************ 00:15:52.592 START TEST raid_superblock_test 00:15:52.592 ************************************ 00:15:52.592 13:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:15:52.592 13:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:15:52.592 13:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:15:52.592 13:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:15:52.592 13:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:15:52.592 13:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:15:52.592 13:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:15:52.592 13:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:15:52.592 13:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:15:52.592 13:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:15:52.592 13:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:15:52.592 13:15:33 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:15:52.592 13:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:15:52.592 13:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:15:52.592 13:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:15:52.592 13:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:15:52.592 13:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:15:52.592 13:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=705111 00:15:52.592 13:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 705111 /var/tmp/spdk-raid.sock 00:15:52.592 13:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:52.592 13:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 705111 ']' 00:15:52.592 13:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:52.592 13:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:52.592 13:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:52.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:52.592 13:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:52.592 13:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.852 [2024-07-26 13:15:33.120042] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:15:52.852 [2024-07-26 13:15:33.120114] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid705111 ] 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3d:02.3 cannot be used 
00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:52.852 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:52.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:52.852 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:52.852 [2024-07-26 13:15:33.252185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.852 [2024-07-26 13:15:33.339445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.111 [2024-07-26 13:15:33.402178] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.111 [2024-07-26 13:15:33.402211] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:53.678 13:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:53.678 13:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:15:53.678 13:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:15:53.678 13:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:53.678 13:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:15:53.678 13:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:15:53.678 13:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:53.678 13:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:53.678 13:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:15:53.678 13:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:53.678 13:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:53.937 malloc1 00:15:53.937 13:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:53.937 [2024-07-26 13:15:34.463392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:53.937 [2024-07-26 13:15:34.463434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.937 [2024-07-26 13:15:34.463452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18132f0 00:15:53.937 [2024-07-26 13:15:34.463464] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.196 [2024-07-26 13:15:34.464972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.196 [2024-07-26 13:15:34.464999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:54.196 pt1 00:15:54.196 13:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:15:54.196 13:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:54.196 13:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:15:54.196 13:15:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:15:54.196 13:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:54.196 13:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:54.196 13:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:15:54.196 13:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:54.196 13:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:54.196 malloc2 00:15:54.196 13:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:54.455 [2024-07-26 13:15:34.921119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:54.455 [2024-07-26 13:15:34.921167] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.455 [2024-07-26 13:15:34.921183] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18146d0 00:15:54.455 [2024-07-26 13:15:34.921195] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.455 [2024-07-26 13:15:34.922634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.455 [2024-07-26 13:15:34.922662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:54.455 pt2 00:15:54.455 13:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:15:54.455 13:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:54.455 13:15:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:15:54.455 13:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:15:54.455 13:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:54.455 13:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:54.455 13:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:15:54.455 13:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:54.455 13:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:54.714 malloc3 00:15:54.714 13:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:54.973 [2024-07-26 13:15:35.358479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:54.973 [2024-07-26 13:15:35.358517] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.973 [2024-07-26 13:15:35.358531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19ad6b0 00:15:54.973 [2024-07-26 13:15:35.358542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.973 [2024-07-26 13:15:35.359841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.973 [2024-07-26 13:15:35.359868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:54.973 pt3 00:15:54.973 13:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:15:54.973 
13:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:54.973 13:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:15:55.233 [2024-07-26 13:15:35.587096] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:55.233 [2024-07-26 13:15:35.588274] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:55.233 [2024-07-26 13:15:35.588325] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:55.233 [2024-07-26 13:15:35.588446] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x19adcb0 00:15:55.233 [2024-07-26 13:15:35.588456] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:55.233 [2024-07-26 13:15:35.588638] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19ad5a0 00:15:55.233 [2024-07-26 13:15:35.588764] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x19adcb0 00:15:55.233 [2024-07-26 13:15:35.588773] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x19adcb0 00:15:55.233 [2024-07-26 13:15:35.588870] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.233 13:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:15:55.233 13:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:55.233 13:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:55.233 13:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:55.233 13:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- 
# local strip_size=64 00:15:55.233 13:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:55.233 13:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:55.233 13:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:55.233 13:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:55.233 13:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:55.233 13:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.233 13:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.492 13:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:55.492 "name": "raid_bdev1", 00:15:55.492 "uuid": "ac7b2464-1f5e-4c4d-a6c6-ad08ca178af3", 00:15:55.492 "strip_size_kb": 64, 00:15:55.492 "state": "online", 00:15:55.492 "raid_level": "concat", 00:15:55.492 "superblock": true, 00:15:55.492 "num_base_bdevs": 3, 00:15:55.492 "num_base_bdevs_discovered": 3, 00:15:55.492 "num_base_bdevs_operational": 3, 00:15:55.492 "base_bdevs_list": [ 00:15:55.492 { 00:15:55.492 "name": "pt1", 00:15:55.492 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:55.492 "is_configured": true, 00:15:55.492 "data_offset": 2048, 00:15:55.492 "data_size": 63488 00:15:55.492 }, 00:15:55.492 { 00:15:55.492 "name": "pt2", 00:15:55.492 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:55.492 "is_configured": true, 00:15:55.492 "data_offset": 2048, 00:15:55.492 "data_size": 63488 00:15:55.492 }, 00:15:55.492 { 00:15:55.492 "name": "pt3", 00:15:55.492 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:55.492 "is_configured": true, 00:15:55.492 "data_offset": 2048, 
00:15:55.492 "data_size": 63488 00:15:55.492 } 00:15:55.492 ] 00:15:55.492 }' 00:15:55.492 13:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:55.492 13:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.060 13:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:15:56.060 13:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:56.060 13:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:56.060 13:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:56.060 13:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:56.060 13:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:56.060 13:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:56.060 13:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:56.319 [2024-07-26 13:15:36.590156] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.319 13:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:56.319 "name": "raid_bdev1", 00:15:56.319 "aliases": [ 00:15:56.319 "ac7b2464-1f5e-4c4d-a6c6-ad08ca178af3" 00:15:56.319 ], 00:15:56.319 "product_name": "Raid Volume", 00:15:56.319 "block_size": 512, 00:15:56.319 "num_blocks": 190464, 00:15:56.319 "uuid": "ac7b2464-1f5e-4c4d-a6c6-ad08ca178af3", 00:15:56.319 "assigned_rate_limits": { 00:15:56.319 "rw_ios_per_sec": 0, 00:15:56.319 "rw_mbytes_per_sec": 0, 00:15:56.319 "r_mbytes_per_sec": 0, 00:15:56.319 "w_mbytes_per_sec": 0 00:15:56.319 }, 00:15:56.319 "claimed": false, 00:15:56.319 "zoned": false, 
00:15:56.319 "supported_io_types": { 00:15:56.319 "read": true, 00:15:56.319 "write": true, 00:15:56.319 "unmap": true, 00:15:56.319 "flush": true, 00:15:56.319 "reset": true, 00:15:56.319 "nvme_admin": false, 00:15:56.319 "nvme_io": false, 00:15:56.319 "nvme_io_md": false, 00:15:56.319 "write_zeroes": true, 00:15:56.319 "zcopy": false, 00:15:56.319 "get_zone_info": false, 00:15:56.319 "zone_management": false, 00:15:56.319 "zone_append": false, 00:15:56.319 "compare": false, 00:15:56.319 "compare_and_write": false, 00:15:56.319 "abort": false, 00:15:56.319 "seek_hole": false, 00:15:56.319 "seek_data": false, 00:15:56.319 "copy": false, 00:15:56.319 "nvme_iov_md": false 00:15:56.319 }, 00:15:56.319 "memory_domains": [ 00:15:56.319 { 00:15:56.319 "dma_device_id": "system", 00:15:56.319 "dma_device_type": 1 00:15:56.319 }, 00:15:56.319 { 00:15:56.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.319 "dma_device_type": 2 00:15:56.319 }, 00:15:56.319 { 00:15:56.319 "dma_device_id": "system", 00:15:56.319 "dma_device_type": 1 00:15:56.319 }, 00:15:56.319 { 00:15:56.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.319 "dma_device_type": 2 00:15:56.319 }, 00:15:56.319 { 00:15:56.319 "dma_device_id": "system", 00:15:56.319 "dma_device_type": 1 00:15:56.319 }, 00:15:56.319 { 00:15:56.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.319 "dma_device_type": 2 00:15:56.319 } 00:15:56.319 ], 00:15:56.319 "driver_specific": { 00:15:56.319 "raid": { 00:15:56.319 "uuid": "ac7b2464-1f5e-4c4d-a6c6-ad08ca178af3", 00:15:56.319 "strip_size_kb": 64, 00:15:56.319 "state": "online", 00:15:56.319 "raid_level": "concat", 00:15:56.319 "superblock": true, 00:15:56.319 "num_base_bdevs": 3, 00:15:56.319 "num_base_bdevs_discovered": 3, 00:15:56.319 "num_base_bdevs_operational": 3, 00:15:56.319 "base_bdevs_list": [ 00:15:56.319 { 00:15:56.319 "name": "pt1", 00:15:56.319 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:56.319 "is_configured": true, 00:15:56.319 "data_offset": 
2048, 00:15:56.319 "data_size": 63488 00:15:56.319 }, 00:15:56.319 { 00:15:56.319 "name": "pt2", 00:15:56.319 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:56.319 "is_configured": true, 00:15:56.319 "data_offset": 2048, 00:15:56.319 "data_size": 63488 00:15:56.319 }, 00:15:56.319 { 00:15:56.319 "name": "pt3", 00:15:56.319 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:56.319 "is_configured": true, 00:15:56.319 "data_offset": 2048, 00:15:56.319 "data_size": 63488 00:15:56.319 } 00:15:56.319 ] 00:15:56.319 } 00:15:56.319 } 00:15:56.319 }' 00:15:56.319 13:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:56.319 13:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:56.319 pt2 00:15:56.319 pt3' 00:15:56.319 13:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:56.319 13:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:56.319 13:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:56.578 13:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:56.578 "name": "pt1", 00:15:56.578 "aliases": [ 00:15:56.578 "00000000-0000-0000-0000-000000000001" 00:15:56.578 ], 00:15:56.578 "product_name": "passthru", 00:15:56.578 "block_size": 512, 00:15:56.578 "num_blocks": 65536, 00:15:56.578 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:56.578 "assigned_rate_limits": { 00:15:56.578 "rw_ios_per_sec": 0, 00:15:56.578 "rw_mbytes_per_sec": 0, 00:15:56.578 "r_mbytes_per_sec": 0, 00:15:56.578 "w_mbytes_per_sec": 0 00:15:56.578 }, 00:15:56.578 "claimed": true, 00:15:56.578 "claim_type": "exclusive_write", 00:15:56.578 "zoned": false, 00:15:56.578 "supported_io_types": { 
00:15:56.578 "read": true, 00:15:56.578 "write": true, 00:15:56.578 "unmap": true, 00:15:56.578 "flush": true, 00:15:56.578 "reset": true, 00:15:56.578 "nvme_admin": false, 00:15:56.578 "nvme_io": false, 00:15:56.578 "nvme_io_md": false, 00:15:56.578 "write_zeroes": true, 00:15:56.578 "zcopy": true, 00:15:56.578 "get_zone_info": false, 00:15:56.578 "zone_management": false, 00:15:56.578 "zone_append": false, 00:15:56.578 "compare": false, 00:15:56.578 "compare_and_write": false, 00:15:56.578 "abort": true, 00:15:56.578 "seek_hole": false, 00:15:56.578 "seek_data": false, 00:15:56.578 "copy": true, 00:15:56.578 "nvme_iov_md": false 00:15:56.578 }, 00:15:56.578 "memory_domains": [ 00:15:56.578 { 00:15:56.578 "dma_device_id": "system", 00:15:56.578 "dma_device_type": 1 00:15:56.578 }, 00:15:56.578 { 00:15:56.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.578 "dma_device_type": 2 00:15:56.578 } 00:15:56.578 ], 00:15:56.578 "driver_specific": { 00:15:56.578 "passthru": { 00:15:56.578 "name": "pt1", 00:15:56.578 "base_bdev_name": "malloc1" 00:15:56.578 } 00:15:56.578 } 00:15:56.578 }' 00:15:56.578 13:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:56.578 13:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:56.578 13:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:56.578 13:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:56.578 13:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:56.578 13:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:56.578 13:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:56.578 13:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:56.837 13:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:15:56.837 13:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:56.837 13:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:56.837 13:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:56.837 13:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:56.837 13:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:56.837 13:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:57.096 13:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:57.096 "name": "pt2", 00:15:57.096 "aliases": [ 00:15:57.096 "00000000-0000-0000-0000-000000000002" 00:15:57.096 ], 00:15:57.096 "product_name": "passthru", 00:15:57.096 "block_size": 512, 00:15:57.096 "num_blocks": 65536, 00:15:57.096 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:57.096 "assigned_rate_limits": { 00:15:57.096 "rw_ios_per_sec": 0, 00:15:57.096 "rw_mbytes_per_sec": 0, 00:15:57.096 "r_mbytes_per_sec": 0, 00:15:57.096 "w_mbytes_per_sec": 0 00:15:57.096 }, 00:15:57.096 "claimed": true, 00:15:57.096 "claim_type": "exclusive_write", 00:15:57.096 "zoned": false, 00:15:57.096 "supported_io_types": { 00:15:57.096 "read": true, 00:15:57.096 "write": true, 00:15:57.096 "unmap": true, 00:15:57.096 "flush": true, 00:15:57.096 "reset": true, 00:15:57.096 "nvme_admin": false, 00:15:57.096 "nvme_io": false, 00:15:57.096 "nvme_io_md": false, 00:15:57.096 "write_zeroes": true, 00:15:57.096 "zcopy": true, 00:15:57.096 "get_zone_info": false, 00:15:57.096 "zone_management": false, 00:15:57.096 "zone_append": false, 00:15:57.096 "compare": false, 00:15:57.096 "compare_and_write": false, 00:15:57.097 "abort": true, 00:15:57.097 "seek_hole": false, 00:15:57.097 "seek_data": 
false, 00:15:57.097 "copy": true, 00:15:57.097 "nvme_iov_md": false 00:15:57.097 }, 00:15:57.097 "memory_domains": [ 00:15:57.097 { 00:15:57.097 "dma_device_id": "system", 00:15:57.097 "dma_device_type": 1 00:15:57.097 }, 00:15:57.097 { 00:15:57.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.097 "dma_device_type": 2 00:15:57.097 } 00:15:57.097 ], 00:15:57.097 "driver_specific": { 00:15:57.097 "passthru": { 00:15:57.097 "name": "pt2", 00:15:57.097 "base_bdev_name": "malloc2" 00:15:57.097 } 00:15:57.097 } 00:15:57.097 }' 00:15:57.097 13:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.097 13:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.097 13:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:57.097 13:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.097 13:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.097 13:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:57.097 13:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.355 13:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.355 13:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:57.355 13:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.355 13:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.355 13:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:57.355 13:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:57.355 13:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:57.355 13:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:57.614 13:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:57.614 "name": "pt3", 00:15:57.614 "aliases": [ 00:15:57.614 "00000000-0000-0000-0000-000000000003" 00:15:57.614 ], 00:15:57.614 "product_name": "passthru", 00:15:57.614 "block_size": 512, 00:15:57.614 "num_blocks": 65536, 00:15:57.614 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:57.614 "assigned_rate_limits": { 00:15:57.614 "rw_ios_per_sec": 0, 00:15:57.614 "rw_mbytes_per_sec": 0, 00:15:57.614 "r_mbytes_per_sec": 0, 00:15:57.614 "w_mbytes_per_sec": 0 00:15:57.614 }, 00:15:57.614 "claimed": true, 00:15:57.614 "claim_type": "exclusive_write", 00:15:57.614 "zoned": false, 00:15:57.614 "supported_io_types": { 00:15:57.614 "read": true, 00:15:57.614 "write": true, 00:15:57.614 "unmap": true, 00:15:57.614 "flush": true, 00:15:57.614 "reset": true, 00:15:57.614 "nvme_admin": false, 00:15:57.614 "nvme_io": false, 00:15:57.614 "nvme_io_md": false, 00:15:57.614 "write_zeroes": true, 00:15:57.614 "zcopy": true, 00:15:57.614 "get_zone_info": false, 00:15:57.614 "zone_management": false, 00:15:57.614 "zone_append": false, 00:15:57.614 "compare": false, 00:15:57.614 "compare_and_write": false, 00:15:57.614 "abort": true, 00:15:57.614 "seek_hole": false, 00:15:57.614 "seek_data": false, 00:15:57.614 "copy": true, 00:15:57.614 "nvme_iov_md": false 00:15:57.614 }, 00:15:57.614 "memory_domains": [ 00:15:57.614 { 00:15:57.614 "dma_device_id": "system", 00:15:57.614 "dma_device_type": 1 00:15:57.614 }, 00:15:57.614 { 00:15:57.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.614 "dma_device_type": 2 00:15:57.614 } 00:15:57.614 ], 00:15:57.614 "driver_specific": { 00:15:57.614 "passthru": { 00:15:57.614 "name": "pt3", 00:15:57.614 "base_bdev_name": "malloc3" 00:15:57.614 } 00:15:57.614 } 00:15:57.614 }' 00:15:57.614 13:15:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.614 13:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.614 13:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:57.614 13:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.614 13:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.874 13:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:57.874 13:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.874 13:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.874 13:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:57.874 13:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.874 13:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.874 13:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:57.874 13:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:57.874 13:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:15:58.133 [2024-07-26 13:15:38.563381] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:58.133 13:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=ac7b2464-1f5e-4c4d-a6c6-ad08ca178af3 00:15:58.133 13:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z ac7b2464-1f5e-4c4d-a6c6-ad08ca178af3 ']' 00:15:58.133 13:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:58.392 [2024-07-26 13:15:38.787696] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:58.392 [2024-07-26 13:15:38.787717] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:58.392 [2024-07-26 13:15:38.787765] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.392 [2024-07-26 13:15:38.787819] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.392 [2024-07-26 13:15:38.787830] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19adcb0 name raid_bdev1, state offline 00:15:58.392 13:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.392 13:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:15:58.651 13:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:15:58.651 13:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:15:58.651 13:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:15:58.651 13:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:58.910 13:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:15:58.910 13:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:59.476 13:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in 
"${base_bdevs_pt[@]}" 00:15:59.476 13:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:59.476 13:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:59.476 13:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:59.734 13:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:15:59.734 13:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:59.734 13:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:59.734 13:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:59.734 13:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:59.734 13:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:59.735 13:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:59.735 13:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:59.735 13:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:59.735 13:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:59.735 13:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:59.735 13:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:15:59.735 13:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:59.993 [2024-07-26 13:15:40.400017] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:59.993 [2024-07-26 13:15:40.401266] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:59.993 [2024-07-26 13:15:40.401316] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:59.993 [2024-07-26 13:15:40.401358] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:59.993 [2024-07-26 13:15:40.401395] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:59.993 [2024-07-26 13:15:40.401416] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:59.993 [2024-07-26 13:15:40.401432] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:59.993 [2024-07-26 13:15:40.401442] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19adcb0 name raid_bdev1, state configuring 00:15:59.993 request: 00:15:59.993 { 00:15:59.993 "name": "raid_bdev1", 00:15:59.993 
"raid_level": "concat", 00:15:59.993 "base_bdevs": [ 00:15:59.993 "malloc1", 00:15:59.993 "malloc2", 00:15:59.993 "malloc3" 00:15:59.993 ], 00:15:59.993 "strip_size_kb": 64, 00:15:59.993 "superblock": false, 00:15:59.993 "method": "bdev_raid_create", 00:15:59.993 "req_id": 1 00:15:59.993 } 00:15:59.993 Got JSON-RPC error response 00:15:59.993 response: 00:15:59.993 { 00:15:59.993 "code": -17, 00:15:59.993 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:59.993 } 00:15:59.993 13:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:59.993 13:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:59.993 13:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:59.993 13:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:59.993 13:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.993 13:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:16:00.252 13:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:16:00.252 13:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:16:00.252 13:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:00.252 [2024-07-26 13:15:40.752890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:00.252 [2024-07-26 13:15:40.752924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.252 [2024-07-26 13:15:40.752939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19aad00 
00:16:00.252 [2024-07-26 13:15:40.752951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.252 [2024-07-26 13:15:40.754280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.252 [2024-07-26 13:15:40.754305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:00.252 [2024-07-26 13:15:40.754358] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:00.252 [2024-07-26 13:15:40.754381] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:00.252 pt1 00:16:00.252 13:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:00.252 13:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:00.252 13:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:00.252 13:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:00.252 13:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:00.252 13:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:00.252 13:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:00.252 13:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:00.252 13:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:00.252 13:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:00.511 13:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.511 13:15:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.511 13:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:00.511 "name": "raid_bdev1", 00:16:00.511 "uuid": "ac7b2464-1f5e-4c4d-a6c6-ad08ca178af3", 00:16:00.511 "strip_size_kb": 64, 00:16:00.511 "state": "configuring", 00:16:00.511 "raid_level": "concat", 00:16:00.511 "superblock": true, 00:16:00.511 "num_base_bdevs": 3, 00:16:00.511 "num_base_bdevs_discovered": 1, 00:16:00.511 "num_base_bdevs_operational": 3, 00:16:00.511 "base_bdevs_list": [ 00:16:00.511 { 00:16:00.511 "name": "pt1", 00:16:00.511 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:00.511 "is_configured": true, 00:16:00.511 "data_offset": 2048, 00:16:00.511 "data_size": 63488 00:16:00.511 }, 00:16:00.511 { 00:16:00.511 "name": null, 00:16:00.511 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.511 "is_configured": false, 00:16:00.511 "data_offset": 2048, 00:16:00.511 "data_size": 63488 00:16:00.511 }, 00:16:00.511 { 00:16:00.511 "name": null, 00:16:00.511 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:00.511 "is_configured": false, 00:16:00.511 "data_offset": 2048, 00:16:00.511 "data_size": 63488 00:16:00.511 } 00:16:00.511 ] 00:16:00.511 }' 00:16:00.511 13:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:00.511 13:15:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.078 13:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:16:01.078 13:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:01.336 [2024-07-26 13:15:41.795643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:01.336 [2024-07-26 13:15:41.795688] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.336 [2024-07-26 13:15:41.795705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19b6d50 00:16:01.336 [2024-07-26 13:15:41.795717] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.336 [2024-07-26 13:15:41.796028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.336 [2024-07-26 13:15:41.796045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:01.336 [2024-07-26 13:15:41.796099] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:01.336 [2024-07-26 13:15:41.796118] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:01.336 pt2 00:16:01.336 13:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:01.595 [2024-07-26 13:15:41.968112] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:01.595 13:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:01.595 13:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:01.595 13:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:01.595 13:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:01.595 13:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:01.595 13:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:01.595 13:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:01.595 13:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:16:01.595 13:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:01.595 13:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:01.595 13:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.595 13:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.854 13:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:01.854 "name": "raid_bdev1", 00:16:01.854 "uuid": "ac7b2464-1f5e-4c4d-a6c6-ad08ca178af3", 00:16:01.854 "strip_size_kb": 64, 00:16:01.854 "state": "configuring", 00:16:01.854 "raid_level": "concat", 00:16:01.854 "superblock": true, 00:16:01.854 "num_base_bdevs": 3, 00:16:01.854 "num_base_bdevs_discovered": 1, 00:16:01.854 "num_base_bdevs_operational": 3, 00:16:01.854 "base_bdevs_list": [ 00:16:01.854 { 00:16:01.854 "name": "pt1", 00:16:01.854 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:01.854 "is_configured": true, 00:16:01.854 "data_offset": 2048, 00:16:01.854 "data_size": 63488 00:16:01.854 }, 00:16:01.854 { 00:16:01.854 "name": null, 00:16:01.854 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:01.854 "is_configured": false, 00:16:01.854 "data_offset": 2048, 00:16:01.854 "data_size": 63488 00:16:01.854 }, 00:16:01.854 { 00:16:01.854 "name": null, 00:16:01.854 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:01.854 "is_configured": false, 00:16:01.854 "data_offset": 2048, 00:16:01.854 "data_size": 63488 00:16:01.854 } 00:16:01.854 ] 00:16:01.854 }' 00:16:01.854 13:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:01.854 13:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.421 13:15:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:16:02.421 13:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:16:02.421 13:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:02.421 [2024-07-26 13:15:42.938658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:02.421 [2024-07-26 13:15:42.938707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.421 [2024-07-26 13:15:42.938725] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x180cf30 00:16:02.421 [2024-07-26 13:15:42.938737] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.421 [2024-07-26 13:15:42.939045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.421 [2024-07-26 13:15:42.939068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:02.421 [2024-07-26 13:15:42.939124] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:02.421 [2024-07-26 13:15:42.939151] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:02.421 pt2 00:16:02.680 13:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:16:02.680 13:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:16:02.680 13:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:02.680 [2024-07-26 13:15:43.167247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:02.680 [2024-07-26 13:15:43.167280] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.680 [2024-07-26 13:15:43.167295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x180bef0 00:16:02.680 [2024-07-26 13:15:43.167305] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.680 [2024-07-26 13:15:43.167570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.680 [2024-07-26 13:15:43.167586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:02.680 [2024-07-26 13:15:43.167632] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:02.680 [2024-07-26 13:15:43.167647] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:02.680 [2024-07-26 13:15:43.167741] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1809c20 00:16:02.680 [2024-07-26 13:15:43.167750] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:02.680 [2024-07-26 13:15:43.167902] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1813f40 00:16:02.680 [2024-07-26 13:15:43.168013] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1809c20 00:16:02.680 [2024-07-26 13:15:43.168022] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1809c20 00:16:02.680 [2024-07-26 13:15:43.168105] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.680 pt3 00:16:02.680 13:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:16:02.680 13:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:16:02.680 13:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:02.680 13:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # 
local raid_bdev_name=raid_bdev1 00:16:02.680 13:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:02.680 13:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:02.680 13:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:02.680 13:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:02.680 13:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:02.680 13:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:02.680 13:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:02.680 13:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:02.680 13:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.680 13:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.938 13:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:02.938 "name": "raid_bdev1", 00:16:02.938 "uuid": "ac7b2464-1f5e-4c4d-a6c6-ad08ca178af3", 00:16:02.938 "strip_size_kb": 64, 00:16:02.938 "state": "online", 00:16:02.938 "raid_level": "concat", 00:16:02.938 "superblock": true, 00:16:02.938 "num_base_bdevs": 3, 00:16:02.938 "num_base_bdevs_discovered": 3, 00:16:02.939 "num_base_bdevs_operational": 3, 00:16:02.939 "base_bdevs_list": [ 00:16:02.939 { 00:16:02.939 "name": "pt1", 00:16:02.939 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:02.939 "is_configured": true, 00:16:02.939 "data_offset": 2048, 00:16:02.939 "data_size": 63488 00:16:02.939 }, 00:16:02.939 { 00:16:02.939 "name": "pt2", 00:16:02.939 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:16:02.939 "is_configured": true, 00:16:02.939 "data_offset": 2048, 00:16:02.939 "data_size": 63488 00:16:02.939 }, 00:16:02.939 { 00:16:02.939 "name": "pt3", 00:16:02.939 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:02.939 "is_configured": true, 00:16:02.939 "data_offset": 2048, 00:16:02.939 "data_size": 63488 00:16:02.939 } 00:16:02.939 ] 00:16:02.939 }' 00:16:02.939 13:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:02.939 13:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.506 13:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:16:03.506 13:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:03.506 13:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:03.506 13:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:03.506 13:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:03.506 13:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:03.506 13:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:03.506 13:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:03.765 [2024-07-26 13:15:44.045822] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:03.765 13:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:03.765 "name": "raid_bdev1", 00:16:03.765 "aliases": [ 00:16:03.765 "ac7b2464-1f5e-4c4d-a6c6-ad08ca178af3" 00:16:03.765 ], 00:16:03.765 "product_name": "Raid Volume", 00:16:03.765 "block_size": 512, 00:16:03.765 "num_blocks": 
190464, 00:16:03.765 "uuid": "ac7b2464-1f5e-4c4d-a6c6-ad08ca178af3", 00:16:03.765 "assigned_rate_limits": { 00:16:03.765 "rw_ios_per_sec": 0, 00:16:03.765 "rw_mbytes_per_sec": 0, 00:16:03.765 "r_mbytes_per_sec": 0, 00:16:03.765 "w_mbytes_per_sec": 0 00:16:03.765 }, 00:16:03.765 "claimed": false, 00:16:03.765 "zoned": false, 00:16:03.765 "supported_io_types": { 00:16:03.765 "read": true, 00:16:03.765 "write": true, 00:16:03.765 "unmap": true, 00:16:03.765 "flush": true, 00:16:03.765 "reset": true, 00:16:03.765 "nvme_admin": false, 00:16:03.765 "nvme_io": false, 00:16:03.765 "nvme_io_md": false, 00:16:03.765 "write_zeroes": true, 00:16:03.765 "zcopy": false, 00:16:03.765 "get_zone_info": false, 00:16:03.765 "zone_management": false, 00:16:03.765 "zone_append": false, 00:16:03.765 "compare": false, 00:16:03.765 "compare_and_write": false, 00:16:03.765 "abort": false, 00:16:03.765 "seek_hole": false, 00:16:03.765 "seek_data": false, 00:16:03.765 "copy": false, 00:16:03.765 "nvme_iov_md": false 00:16:03.765 }, 00:16:03.765 "memory_domains": [ 00:16:03.765 { 00:16:03.765 "dma_device_id": "system", 00:16:03.765 "dma_device_type": 1 00:16:03.765 }, 00:16:03.765 { 00:16:03.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.765 "dma_device_type": 2 00:16:03.765 }, 00:16:03.765 { 00:16:03.765 "dma_device_id": "system", 00:16:03.765 "dma_device_type": 1 00:16:03.765 }, 00:16:03.765 { 00:16:03.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.765 "dma_device_type": 2 00:16:03.765 }, 00:16:03.765 { 00:16:03.765 "dma_device_id": "system", 00:16:03.765 "dma_device_type": 1 00:16:03.765 }, 00:16:03.765 { 00:16:03.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.765 "dma_device_type": 2 00:16:03.765 } 00:16:03.765 ], 00:16:03.765 "driver_specific": { 00:16:03.765 "raid": { 00:16:03.765 "uuid": "ac7b2464-1f5e-4c4d-a6c6-ad08ca178af3", 00:16:03.765 "strip_size_kb": 64, 00:16:03.765 "state": "online", 00:16:03.765 "raid_level": "concat", 00:16:03.765 "superblock": true, 
00:16:03.765 "num_base_bdevs": 3, 00:16:03.765 "num_base_bdevs_discovered": 3, 00:16:03.765 "num_base_bdevs_operational": 3, 00:16:03.765 "base_bdevs_list": [ 00:16:03.765 { 00:16:03.765 "name": "pt1", 00:16:03.765 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:03.765 "is_configured": true, 00:16:03.765 "data_offset": 2048, 00:16:03.765 "data_size": 63488 00:16:03.765 }, 00:16:03.765 { 00:16:03.765 "name": "pt2", 00:16:03.765 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:03.765 "is_configured": true, 00:16:03.765 "data_offset": 2048, 00:16:03.765 "data_size": 63488 00:16:03.765 }, 00:16:03.765 { 00:16:03.765 "name": "pt3", 00:16:03.765 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:03.765 "is_configured": true, 00:16:03.765 "data_offset": 2048, 00:16:03.765 "data_size": 63488 00:16:03.765 } 00:16:03.765 ] 00:16:03.765 } 00:16:03.765 } 00:16:03.765 }' 00:16:03.765 13:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:03.765 13:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:03.765 pt2 00:16:03.765 pt3' 00:16:03.765 13:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:03.765 13:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:03.765 13:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:04.024 13:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:04.024 "name": "pt1", 00:16:04.024 "aliases": [ 00:16:04.024 "00000000-0000-0000-0000-000000000001" 00:16:04.024 ], 00:16:04.024 "product_name": "passthru", 00:16:04.024 "block_size": 512, 00:16:04.024 "num_blocks": 65536, 00:16:04.024 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:04.024 
"assigned_rate_limits": { 00:16:04.024 "rw_ios_per_sec": 0, 00:16:04.024 "rw_mbytes_per_sec": 0, 00:16:04.024 "r_mbytes_per_sec": 0, 00:16:04.024 "w_mbytes_per_sec": 0 00:16:04.024 }, 00:16:04.024 "claimed": true, 00:16:04.024 "claim_type": "exclusive_write", 00:16:04.024 "zoned": false, 00:16:04.024 "supported_io_types": { 00:16:04.024 "read": true, 00:16:04.024 "write": true, 00:16:04.024 "unmap": true, 00:16:04.024 "flush": true, 00:16:04.024 "reset": true, 00:16:04.024 "nvme_admin": false, 00:16:04.024 "nvme_io": false, 00:16:04.024 "nvme_io_md": false, 00:16:04.024 "write_zeroes": true, 00:16:04.024 "zcopy": true, 00:16:04.024 "get_zone_info": false, 00:16:04.024 "zone_management": false, 00:16:04.024 "zone_append": false, 00:16:04.024 "compare": false, 00:16:04.024 "compare_and_write": false, 00:16:04.024 "abort": true, 00:16:04.024 "seek_hole": false, 00:16:04.024 "seek_data": false, 00:16:04.024 "copy": true, 00:16:04.024 "nvme_iov_md": false 00:16:04.024 }, 00:16:04.024 "memory_domains": [ 00:16:04.024 { 00:16:04.024 "dma_device_id": "system", 00:16:04.024 "dma_device_type": 1 00:16:04.024 }, 00:16:04.024 { 00:16:04.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.024 "dma_device_type": 2 00:16:04.024 } 00:16:04.024 ], 00:16:04.025 "driver_specific": { 00:16:04.025 "passthru": { 00:16:04.025 "name": "pt1", 00:16:04.025 "base_bdev_name": "malloc1" 00:16:04.025 } 00:16:04.025 } 00:16:04.025 }' 00:16:04.025 13:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:04.025 13:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:04.025 13:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:04.025 13:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:04.025 13:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:04.025 13:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # 
[[ null == null ]] 00:16:04.025 13:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:04.283 13:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:04.284 13:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:04.284 13:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:04.284 13:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:04.284 13:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:04.284 13:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:04.284 13:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:04.284 13:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:04.543 13:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:04.543 "name": "pt2", 00:16:04.543 "aliases": [ 00:16:04.543 "00000000-0000-0000-0000-000000000002" 00:16:04.543 ], 00:16:04.543 "product_name": "passthru", 00:16:04.543 "block_size": 512, 00:16:04.543 "num_blocks": 65536, 00:16:04.543 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:04.543 "assigned_rate_limits": { 00:16:04.543 "rw_ios_per_sec": 0, 00:16:04.543 "rw_mbytes_per_sec": 0, 00:16:04.543 "r_mbytes_per_sec": 0, 00:16:04.543 "w_mbytes_per_sec": 0 00:16:04.543 }, 00:16:04.543 "claimed": true, 00:16:04.543 "claim_type": "exclusive_write", 00:16:04.543 "zoned": false, 00:16:04.543 "supported_io_types": { 00:16:04.543 "read": true, 00:16:04.543 "write": true, 00:16:04.543 "unmap": true, 00:16:04.543 "flush": true, 00:16:04.543 "reset": true, 00:16:04.543 "nvme_admin": false, 00:16:04.543 "nvme_io": false, 00:16:04.543 "nvme_io_md": false, 00:16:04.543 
"write_zeroes": true, 00:16:04.543 "zcopy": true, 00:16:04.543 "get_zone_info": false, 00:16:04.543 "zone_management": false, 00:16:04.543 "zone_append": false, 00:16:04.543 "compare": false, 00:16:04.543 "compare_and_write": false, 00:16:04.543 "abort": true, 00:16:04.543 "seek_hole": false, 00:16:04.543 "seek_data": false, 00:16:04.543 "copy": true, 00:16:04.543 "nvme_iov_md": false 00:16:04.543 }, 00:16:04.543 "memory_domains": [ 00:16:04.543 { 00:16:04.543 "dma_device_id": "system", 00:16:04.543 "dma_device_type": 1 00:16:04.543 }, 00:16:04.543 { 00:16:04.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.543 "dma_device_type": 2 00:16:04.543 } 00:16:04.543 ], 00:16:04.543 "driver_specific": { 00:16:04.543 "passthru": { 00:16:04.543 "name": "pt2", 00:16:04.543 "base_bdev_name": "malloc2" 00:16:04.543 } 00:16:04.543 } 00:16:04.543 }' 00:16:04.543 13:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:04.543 13:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:04.543 13:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:04.543 13:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:04.543 13:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:04.543 13:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:04.543 13:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:04.802 13:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:04.802 13:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:04.802 13:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:04.802 13:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:04.802 13:15:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:04.802 13:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:04.802 13:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:04.802 13:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:05.061 13:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:05.061 "name": "pt3", 00:16:05.061 "aliases": [ 00:16:05.061 "00000000-0000-0000-0000-000000000003" 00:16:05.061 ], 00:16:05.061 "product_name": "passthru", 00:16:05.061 "block_size": 512, 00:16:05.061 "num_blocks": 65536, 00:16:05.061 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:05.061 "assigned_rate_limits": { 00:16:05.061 "rw_ios_per_sec": 0, 00:16:05.061 "rw_mbytes_per_sec": 0, 00:16:05.061 "r_mbytes_per_sec": 0, 00:16:05.061 "w_mbytes_per_sec": 0 00:16:05.061 }, 00:16:05.061 "claimed": true, 00:16:05.061 "claim_type": "exclusive_write", 00:16:05.061 "zoned": false, 00:16:05.061 "supported_io_types": { 00:16:05.061 "read": true, 00:16:05.061 "write": true, 00:16:05.061 "unmap": true, 00:16:05.061 "flush": true, 00:16:05.061 "reset": true, 00:16:05.061 "nvme_admin": false, 00:16:05.061 "nvme_io": false, 00:16:05.061 "nvme_io_md": false, 00:16:05.061 "write_zeroes": true, 00:16:05.061 "zcopy": true, 00:16:05.061 "get_zone_info": false, 00:16:05.061 "zone_management": false, 00:16:05.061 "zone_append": false, 00:16:05.061 "compare": false, 00:16:05.061 "compare_and_write": false, 00:16:05.061 "abort": true, 00:16:05.061 "seek_hole": false, 00:16:05.061 "seek_data": false, 00:16:05.061 "copy": true, 00:16:05.061 "nvme_iov_md": false 00:16:05.061 }, 00:16:05.061 "memory_domains": [ 00:16:05.061 { 00:16:05.061 "dma_device_id": "system", 00:16:05.061 "dma_device_type": 1 00:16:05.061 }, 00:16:05.061 { 00:16:05.061 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.061 "dma_device_type": 2 00:16:05.061 } 00:16:05.061 ], 00:16:05.061 "driver_specific": { 00:16:05.061 "passthru": { 00:16:05.061 "name": "pt3", 00:16:05.061 "base_bdev_name": "malloc3" 00:16:05.061 } 00:16:05.061 } 00:16:05.061 }' 00:16:05.061 13:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:05.061 13:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:05.061 13:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:05.061 13:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:05.320 13:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:05.320 13:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:05.320 13:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:05.320 13:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:05.320 13:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:05.320 13:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:05.320 13:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:05.320 13:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:05.320 13:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:05.320 13:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:16:05.579 [2024-07-26 13:15:46.007015] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:05.579 13:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 
ac7b2464-1f5e-4c4d-a6c6-ad08ca178af3 '!=' ac7b2464-1f5e-4c4d-a6c6-ad08ca178af3 ']' 00:16:05.579 13:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:16:05.579 13:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:05.579 13:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:05.579 13:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 705111 00:16:05.579 13:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 705111 ']' 00:16:05.579 13:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 705111 00:16:05.579 13:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:16:05.579 13:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:05.579 13:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 705111 00:16:05.579 13:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:05.579 13:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:05.579 13:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 705111' 00:16:05.579 killing process with pid 705111 00:16:05.579 13:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 705111 00:16:05.579 [2024-07-26 13:15:46.086217] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:05.579 [2024-07-26 13:15:46.086267] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.579 [2024-07-26 13:15:46.086316] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.579 [2024-07-26 13:15:46.086327] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x1809c20 name raid_bdev1, state offline 00:16:05.579 13:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 705111 00:16:05.838 [2024-07-26 13:15:46.109932] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:05.838 13:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:16:05.838 00:16:05.838 real 0m13.238s 00:16:05.838 user 0m23.771s 00:16:05.838 sys 0m2.449s 00:16:05.838 13:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:05.838 13:15:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.838 ************************************ 00:16:05.838 END TEST raid_superblock_test 00:16:05.838 ************************************ 00:16:05.838 13:15:46 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:16:05.838 13:15:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:05.838 13:15:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:05.838 13:15:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:06.097 ************************************ 00:16:06.097 START TEST raid_read_error_test 00:16:06.097 ************************************ 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:06.097 13:15:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.xjTZU33H8p 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=707767 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 707767 /var/tmp/spdk-raid.sock 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 707767 ']' 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:06.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:06.097 13:15:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.097 [2024-07-26 13:15:46.459481] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:16:06.097 [2024-07-26 13:15:46.459540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid707767 ] 00:16:06.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.097 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:06.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.097 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:06.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.097 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:06.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.097 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:06.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.098 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:06.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.098 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:06.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.098 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:06.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.098 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:06.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.098 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:06.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.098 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:06.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.098 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:06.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.098 EAL: Requested device 0000:3d:02.3 cannot be used 
00:16:06.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.098 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:06.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.098 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:06.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.098 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:06.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.098 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:06.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.098 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:06.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.098 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:06.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.098 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:06.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.098 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:06.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.098 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:06.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.098 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:06.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.098 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:06.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.098 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:06.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.098 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:06.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.098 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:06.098 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.098 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:06.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.098 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:06.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.098 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:06.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.098 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:06.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.098 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:06.098 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:06.098 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:06.098 [2024-07-26 13:15:46.589520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.356 [2024-07-26 13:15:46.676510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.356 [2024-07-26 13:15:46.735041] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:06.356 [2024-07-26 13:15:46.735081] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:06.923 13:15:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:06.923 13:15:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:16:06.923 13:15:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:06.923 13:15:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:07.182 BaseBdev1_malloc 00:16:07.182 13:15:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:07.441 true 00:16:07.441 13:15:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:07.700 [2024-07-26 13:15:48.020294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:07.700 [2024-07-26 13:15:48.020331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.700 [2024-07-26 13:15:48.020348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xeef190 00:16:07.700 [2024-07-26 13:15:48.020360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.700 [2024-07-26 13:15:48.021843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.700 [2024-07-26 13:15:48.021871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:07.700 BaseBdev1 00:16:07.700 13:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:07.700 13:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:07.959 BaseBdev2_malloc 00:16:07.959 13:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:08.218 true 00:16:08.218 13:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:08.218 [2024-07-26 13:15:48.714437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:16:08.218 [2024-07-26 13:15:48.714476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.218 [2024-07-26 13:15:48.714493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xef3e20 00:16:08.218 [2024-07-26 13:15:48.714505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.218 [2024-07-26 13:15:48.715877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.218 [2024-07-26 13:15:48.715903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:08.218 BaseBdev2 00:16:08.218 13:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:08.218 13:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:08.477 BaseBdev3_malloc 00:16:08.477 13:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:08.785 true 00:16:08.785 13:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:09.057 [2024-07-26 13:15:49.388398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:09.057 [2024-07-26 13:15:49.388439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.057 [2024-07-26 13:15:49.388460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xef4d90 00:16:09.057 [2024-07-26 13:15:49.388472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.057 [2024-07-26 13:15:49.389855] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.057 [2024-07-26 13:15:49.389882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:09.057 BaseBdev3 00:16:09.057 13:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:16:09.316 [2024-07-26 13:15:49.613024] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:09.316 [2024-07-26 13:15:49.614204] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:09.316 [2024-07-26 13:15:49.614268] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:09.316 [2024-07-26 13:15:49.614440] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xef6ba0 00:16:09.316 [2024-07-26 13:15:49.614450] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:09.316 [2024-07-26 13:15:49.614629] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xefaf30 00:16:09.316 [2024-07-26 13:15:49.614764] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xef6ba0 00:16:09.316 [2024-07-26 13:15:49.614773] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xef6ba0 00:16:09.316 [2024-07-26 13:15:49.614879] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.316 13:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:09.316 13:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:09.316 13:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:09.316 13:15:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:09.316 13:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:09.316 13:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:09.316 13:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:09.316 13:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:09.316 13:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:09.316 13:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:09.316 13:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.316 13:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.576 13:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:09.576 "name": "raid_bdev1", 00:16:09.576 "uuid": "5d339a8c-7efb-43bd-a602-4b077bc4124e", 00:16:09.576 "strip_size_kb": 64, 00:16:09.576 "state": "online", 00:16:09.576 "raid_level": "concat", 00:16:09.576 "superblock": true, 00:16:09.576 "num_base_bdevs": 3, 00:16:09.576 "num_base_bdevs_discovered": 3, 00:16:09.576 "num_base_bdevs_operational": 3, 00:16:09.576 "base_bdevs_list": [ 00:16:09.576 { 00:16:09.576 "name": "BaseBdev1", 00:16:09.576 "uuid": "bb679807-c7ad-5298-b735-a1961a21f461", 00:16:09.576 "is_configured": true, 00:16:09.576 "data_offset": 2048, 00:16:09.576 "data_size": 63488 00:16:09.576 }, 00:16:09.576 { 00:16:09.576 "name": "BaseBdev2", 00:16:09.576 "uuid": "981b8c6b-5dd1-52f9-959b-6bf034580e07", 00:16:09.576 "is_configured": true, 00:16:09.576 "data_offset": 2048, 00:16:09.576 "data_size": 63488 00:16:09.576 }, 
00:16:09.576 { 00:16:09.576 "name": "BaseBdev3", 00:16:09.576 "uuid": "9c120a5b-d38c-5e88-88de-a6484b9dd84a", 00:16:09.576 "is_configured": true, 00:16:09.576 "data_offset": 2048, 00:16:09.576 "data_size": 63488 00:16:09.576 } 00:16:09.576 ] 00:16:09.576 }' 00:16:09.576 13:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:09.576 13:15:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.145 13:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:16:10.145 13:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:10.145 [2024-07-26 13:15:50.535700] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xef7f70 00:16:11.083 13:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:11.343 13:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:16:11.343 13:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:16:11.343 13:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:16:11.343 13:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:11.343 13:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:11.343 13:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:11.343 13:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:11.343 13:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:16:11.343 13:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:11.343 13:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:11.343 13:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:11.343 13:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:11.343 13:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:11.343 13:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.343 13:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.603 13:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:11.603 "name": "raid_bdev1", 00:16:11.603 "uuid": "5d339a8c-7efb-43bd-a602-4b077bc4124e", 00:16:11.603 "strip_size_kb": 64, 00:16:11.603 "state": "online", 00:16:11.603 "raid_level": "concat", 00:16:11.603 "superblock": true, 00:16:11.603 "num_base_bdevs": 3, 00:16:11.603 "num_base_bdevs_discovered": 3, 00:16:11.603 "num_base_bdevs_operational": 3, 00:16:11.603 "base_bdevs_list": [ 00:16:11.603 { 00:16:11.603 "name": "BaseBdev1", 00:16:11.603 "uuid": "bb679807-c7ad-5298-b735-a1961a21f461", 00:16:11.603 "is_configured": true, 00:16:11.603 "data_offset": 2048, 00:16:11.603 "data_size": 63488 00:16:11.603 }, 00:16:11.603 { 00:16:11.603 "name": "BaseBdev2", 00:16:11.603 "uuid": "981b8c6b-5dd1-52f9-959b-6bf034580e07", 00:16:11.603 "is_configured": true, 00:16:11.603 "data_offset": 2048, 00:16:11.603 "data_size": 63488 00:16:11.603 }, 00:16:11.603 { 00:16:11.603 "name": "BaseBdev3", 00:16:11.603 "uuid": "9c120a5b-d38c-5e88-88de-a6484b9dd84a", 00:16:11.603 "is_configured": true, 00:16:11.603 "data_offset": 2048, 
00:16:11.603 "data_size": 63488 00:16:11.603 } 00:16:11.603 ] 00:16:11.603 }' 00:16:11.603 13:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:11.603 13:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.172 13:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:12.172 [2024-07-26 13:15:52.683064] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:12.172 [2024-07-26 13:15:52.683102] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:12.172 [2024-07-26 13:15:52.686017] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:12.172 [2024-07-26 13:15:52.686049] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.172 [2024-07-26 13:15:52.686078] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:12.172 [2024-07-26 13:15:52.686088] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xef6ba0 name raid_bdev1, state offline 00:16:12.172 0 00:16:12.431 13:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 707767 00:16:12.432 13:15:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 707767 ']' 00:16:12.432 13:15:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 707767 00:16:12.432 13:15:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:16:12.432 13:15:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:12.432 13:15:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 707767 00:16:12.432 13:15:52 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:12.432 13:15:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:12.432 13:15:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 707767' 00:16:12.432 killing process with pid 707767 00:16:12.432 13:15:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 707767 00:16:12.432 [2024-07-26 13:15:52.762537] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:12.432 13:15:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 707767 00:16:12.432 [2024-07-26 13:15:52.780382] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:12.691 13:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:16:12.691 13:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.xjTZU33H8p 00:16:12.691 13:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:16:12.691 13:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:16:12.691 13:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:16:12.691 13:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:12.691 13:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:12.691 13:15:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:16:12.691 00:16:12.691 real 0m6.600s 00:16:12.691 user 0m10.414s 00:16:12.691 sys 0m1.155s 00:16:12.691 13:15:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:12.691 13:15:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.691 ************************************ 00:16:12.691 END TEST raid_read_error_test 00:16:12.691 
************************************ 00:16:12.691 13:15:53 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:16:12.691 13:15:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:12.691 13:15:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:12.691 13:15:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:12.691 ************************************ 00:16:12.691 START TEST raid_write_error_test 00:16:12.691 ************************************ 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.oLQFZzgGhL 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=708928 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 708928 /var/tmp/spdk-raid.sock 00:16:12.691 13:15:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 708928 ']' 00:16:12.692 13:15:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:12.692 13:15:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 
-- # local max_retries=100 00:16:12.692 13:15:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:12.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:12.692 13:15:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:12.692 13:15:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.692 13:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:12.692 [2024-07-26 13:15:53.132567] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:16:12.692 [2024-07-26 13:15:53.132624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid708928 ] 00:16:12.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:12.692 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:12.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:12.692 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:12.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:12.692 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:12.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:12.692 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:12.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:12.692 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:12.692 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:12.692 EAL: 
Requested device 0000:3d:01.5 cannot be used 00:16:12.950 [2024-07-26 13:15:53.263019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.950 [2024-07-26 13:15:53.349583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.950 [2024-07-26 13:15:53.408852]
bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:12.950 [2024-07-26 13:15:53.408885] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:13.518 13:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:13.518 13:15:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:16:13.518 13:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:13.518 13:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:13.777 BaseBdev1_malloc 00:16:13.777 13:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:14.346 true 00:16:14.346 13:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:14.605 [2024-07-26 13:15:54.939340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:14.605 [2024-07-26 13:15:54.939381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.605 [2024-07-26 13:15:54.939399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26bd190 00:16:14.605 [2024-07-26 13:15:54.939410] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.605 [2024-07-26 13:15:54.941008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.605 [2024-07-26 13:15:54.941036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:14.605 BaseBdev1 00:16:14.605 13:15:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:14.605 13:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:14.865 BaseBdev2_malloc 00:16:14.865 13:15:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:15.434 true 00:16:15.434 13:15:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:15.434 [2024-07-26 13:15:55.882028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:15.434 [2024-07-26 13:15:55.882067] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.434 [2024-07-26 13:15:55.882085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26c1e20 00:16:15.434 [2024-07-26 13:15:55.882097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.434 [2024-07-26 13:15:55.883476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.434 [2024-07-26 13:15:55.883503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:15.434 BaseBdev2 00:16:15.434 13:15:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:15.434 13:15:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:15.694 BaseBdev3_malloc 00:16:15.694 13:15:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:16.263 true 00:16:16.263 13:15:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:16.523 [2024-07-26 13:15:56.824803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:16.523 [2024-07-26 13:15:56.824843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.523 [2024-07-26 13:15:56.824864] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26c2d90 00:16:16.523 [2024-07-26 13:15:56.824876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.523 [2024-07-26 13:15:56.826271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.523 [2024-07-26 13:15:56.826298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:16.523 BaseBdev3 00:16:16.523 13:15:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:16:16.523 [2024-07-26 13:15:57.037391] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:16.523 [2024-07-26 13:15:57.038521] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:16.523 [2024-07-26 13:15:57.038586] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:16.523 [2024-07-26 13:15:57.038755] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x26c4ba0 00:16:16.523 [2024-07-26 13:15:57.038766] 
bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:16.523 [2024-07-26 13:15:57.038939] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26c8f30 00:16:16.523 [2024-07-26 13:15:57.039072] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x26c4ba0 00:16:16.523 [2024-07-26 13:15:57.039081] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x26c4ba0 00:16:16.523 [2024-07-26 13:15:57.039197] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.782 13:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:16.782 13:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:16.782 13:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:16.782 13:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:16.782 13:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:16.782 13:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:16.782 13:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:16.782 13:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:16.782 13:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:16.782 13:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:16.782 13:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.782 13:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:16:16.782 13:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:16.782 "name": "raid_bdev1", 00:16:16.782 "uuid": "96f098af-cd8e-40ea-b4b9-b9d640b65ac7", 00:16:16.782 "strip_size_kb": 64, 00:16:16.782 "state": "online", 00:16:16.782 "raid_level": "concat", 00:16:16.782 "superblock": true, 00:16:16.782 "num_base_bdevs": 3, 00:16:16.782 "num_base_bdevs_discovered": 3, 00:16:16.782 "num_base_bdevs_operational": 3, 00:16:16.782 "base_bdevs_list": [ 00:16:16.782 { 00:16:16.782 "name": "BaseBdev1", 00:16:16.782 "uuid": "fb834ee7-a77f-587c-abeb-9191f5d5ba2f", 00:16:16.782 "is_configured": true, 00:16:16.782 "data_offset": 2048, 00:16:16.782 "data_size": 63488 00:16:16.782 }, 00:16:16.782 { 00:16:16.782 "name": "BaseBdev2", 00:16:16.782 "uuid": "67ef6d84-707a-51a4-8edb-a8648ea01867", 00:16:16.782 "is_configured": true, 00:16:16.782 "data_offset": 2048, 00:16:16.782 "data_size": 63488 00:16:16.782 }, 00:16:16.782 { 00:16:16.782 "name": "BaseBdev3", 00:16:16.782 "uuid": "98b0d54d-6125-5c7e-802b-53ff0571b86c", 00:16:16.782 "is_configured": true, 00:16:16.782 "data_offset": 2048, 00:16:16.782 "data_size": 63488 00:16:16.782 } 00:16:16.782 ] 00:16:16.782 }' 00:16:16.782 13:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:16.782 13:15:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.351 13:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:16:17.351 13:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:17.610 [2024-07-26 13:15:57.952031] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26c5f70 00:16:18.548 13:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:18.808 13:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:16:18.808 13:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:16:18.808 13:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:16:18.808 13:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:18.808 13:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:18.808 13:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:18.808 13:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:18.808 13:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:18.808 13:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:18.808 13:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:18.808 13:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:18.808 13:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:18.808 13:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:18.808 13:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.808 13:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.808 13:15:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:18.808 "name": "raid_bdev1", 00:16:18.808 "uuid": "96f098af-cd8e-40ea-b4b9-b9d640b65ac7", 00:16:18.808 "strip_size_kb": 64, 00:16:18.808 "state": "online", 00:16:18.808 "raid_level": "concat", 00:16:18.808 "superblock": true, 00:16:18.808 "num_base_bdevs": 3, 00:16:18.808 "num_base_bdevs_discovered": 3, 00:16:18.808 "num_base_bdevs_operational": 3, 00:16:18.808 "base_bdevs_list": [ 00:16:18.808 { 00:16:18.808 "name": "BaseBdev1", 00:16:18.808 "uuid": "fb834ee7-a77f-587c-abeb-9191f5d5ba2f", 00:16:18.808 "is_configured": true, 00:16:18.808 "data_offset": 2048, 00:16:18.808 "data_size": 63488 00:16:18.808 }, 00:16:18.808 { 00:16:18.808 "name": "BaseBdev2", 00:16:18.808 "uuid": "67ef6d84-707a-51a4-8edb-a8648ea01867", 00:16:18.808 "is_configured": true, 00:16:18.808 "data_offset": 2048, 00:16:18.808 "data_size": 63488 00:16:18.808 }, 00:16:18.808 { 00:16:18.808 "name": "BaseBdev3", 00:16:18.808 "uuid": "98b0d54d-6125-5c7e-802b-53ff0571b86c", 00:16:18.808 "is_configured": true, 00:16:18.808 "data_offset": 2048, 00:16:18.808 "data_size": 63488 00:16:18.808 } 00:16:18.808 ] 00:16:18.808 }' 00:16:18.808 13:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:18.809 13:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.377 13:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:19.637 [2024-07-26 13:16:00.102005] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:19.637 [2024-07-26 13:16:00.102037] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:19.637 [2024-07-26 13:16:00.104960] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.637 [2024-07-26 13:16:00.104993] bdev_raid.c: 343:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:16:19.637 [2024-07-26 13:16:00.105023] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.637 [2024-07-26 13:16:00.105033] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x26c4ba0 name raid_bdev1, state offline 00:16:19.637 0 00:16:19.637 13:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 708928 00:16:19.637 13:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 708928 ']' 00:16:19.637 13:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 708928 00:16:19.637 13:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:16:19.637 13:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:19.637 13:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 708928 00:16:19.897 13:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:19.897 13:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:19.897 13:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 708928' 00:16:19.897 killing process with pid 708928 00:16:19.897 13:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 708928 00:16:19.897 [2024-07-26 13:16:00.179050] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:19.897 13:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 708928 00:16:19.897 [2024-07-26 13:16:00.198455] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:19.897 13:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:16:19.897 13:16:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.oLQFZzgGhL 00:16:19.897 13:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:16:19.897 13:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:16:19.897 13:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:16:19.897 13:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:19.897 13:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:19.897 13:16:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:16:19.897 00:16:19.897 real 0m7.342s 00:16:19.897 user 0m11.788s 00:16:19.897 sys 0m1.242s 00:16:19.897 13:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:19.897 13:16:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.897 ************************************ 00:16:19.897 END TEST raid_write_error_test 00:16:19.897 ************************************ 00:16:20.157 13:16:00 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:16:20.157 13:16:00 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:16:20.157 13:16:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:20.157 13:16:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:20.157 13:16:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:20.157 ************************************ 00:16:20.157 START TEST raid_state_function_test 00:16:20.157 ************************************ 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:16:20.157 
13:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=710345 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 710345' 00:16:20.157 Process raid pid: 710345 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 710345 /var/tmp/spdk-raid.sock 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 710345 ']' 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:20.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:20.157 13:16:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.157 [2024-07-26 13:16:00.528222] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:16:20.157 [2024-07-26 13:16:00.528265] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:20.157 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:20.157 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:20.157 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:20.157 [2024-07-26 13:16:00.644790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.417 [2024-07-26 13:16:00.733116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.417 [2024-07-26 13:16:00.790025] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:20.417 [2024-07-26 13:16:00.790054] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:20.676 13:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:20.676 13:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:16:20.676 13:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b
'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:20.676 [2024-07-26 13:16:01.154391] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:20.676 [2024-07-26 13:16:01.154432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:20.676 [2024-07-26 13:16:01.154444] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:20.676 [2024-07-26 13:16:01.154458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:20.676 [2024-07-26 13:16:01.154467] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:20.676 [2024-07-26 13:16:01.154481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:20.676 13:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:20.676 13:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:20.676 13:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:20.676 13:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:20.676 13:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:20.676 13:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:20.676 13:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:20.676 13:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:20.676 13:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:20.676 13:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:16:20.676 13:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.676 13:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.244 13:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:21.244 "name": "Existed_Raid", 00:16:21.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.244 "strip_size_kb": 0, 00:16:21.244 "state": "configuring", 00:16:21.244 "raid_level": "raid1", 00:16:21.244 "superblock": false, 00:16:21.244 "num_base_bdevs": 3, 00:16:21.244 "num_base_bdevs_discovered": 0, 00:16:21.244 "num_base_bdevs_operational": 3, 00:16:21.244 "base_bdevs_list": [ 00:16:21.244 { 00:16:21.244 "name": "BaseBdev1", 00:16:21.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.244 "is_configured": false, 00:16:21.244 "data_offset": 0, 00:16:21.244 "data_size": 0 00:16:21.244 }, 00:16:21.244 { 00:16:21.244 "name": "BaseBdev2", 00:16:21.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.244 "is_configured": false, 00:16:21.244 "data_offset": 0, 00:16:21.244 "data_size": 0 00:16:21.244 }, 00:16:21.244 { 00:16:21.244 "name": "BaseBdev3", 00:16:21.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.244 "is_configured": false, 00:16:21.244 "data_offset": 0, 00:16:21.244 "data_size": 0 00:16:21.244 } 00:16:21.244 ] 00:16:21.244 }' 00:16:21.244 13:16:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:21.244 13:16:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.812 13:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:22.085 [2024-07-26 13:16:02.429633] 
bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:22.085 [2024-07-26 13:16:02.429665] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1602f40 name Existed_Raid, state configuring 00:16:22.085 13:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:22.349 [2024-07-26 13:16:02.658248] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:22.349 [2024-07-26 13:16:02.658282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:22.349 [2024-07-26 13:16:02.658292] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:22.349 [2024-07-26 13:16:02.658303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:22.349 [2024-07-26 13:16:02.658311] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:22.349 [2024-07-26 13:16:02.658321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:22.349 13:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:22.607 [2024-07-26 13:16:02.896324] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:22.607 BaseBdev1 00:16:22.607 13:16:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:22.607 13:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:22.607 13:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:22.607 13:16:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:22.607 13:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:22.607 13:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:22.607 13:16:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:22.866 13:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:22.866 [ 00:16:22.866 { 00:16:22.866 "name": "BaseBdev1", 00:16:22.866 "aliases": [ 00:16:22.866 "544f528d-2de2-4b51-89f0-a683a3cdfc7a" 00:16:22.866 ], 00:16:22.866 "product_name": "Malloc disk", 00:16:22.866 "block_size": 512, 00:16:22.866 "num_blocks": 65536, 00:16:22.866 "uuid": "544f528d-2de2-4b51-89f0-a683a3cdfc7a", 00:16:22.866 "assigned_rate_limits": { 00:16:22.866 "rw_ios_per_sec": 0, 00:16:22.866 "rw_mbytes_per_sec": 0, 00:16:22.866 "r_mbytes_per_sec": 0, 00:16:22.866 "w_mbytes_per_sec": 0 00:16:22.866 }, 00:16:22.866 "claimed": true, 00:16:22.866 "claim_type": "exclusive_write", 00:16:22.866 "zoned": false, 00:16:22.866 "supported_io_types": { 00:16:22.866 "read": true, 00:16:22.866 "write": true, 00:16:22.866 "unmap": true, 00:16:22.866 "flush": true, 00:16:22.866 "reset": true, 00:16:22.866 "nvme_admin": false, 00:16:22.866 "nvme_io": false, 00:16:22.866 "nvme_io_md": false, 00:16:22.866 "write_zeroes": true, 00:16:22.866 "zcopy": true, 00:16:22.866 "get_zone_info": false, 00:16:22.866 "zone_management": false, 00:16:22.866 "zone_append": false, 00:16:22.866 "compare": false, 00:16:22.866 "compare_and_write": false, 00:16:22.866 "abort": true, 00:16:22.866 "seek_hole": false, 00:16:22.866 "seek_data": false, 00:16:22.866 "copy": 
true, 00:16:22.866 "nvme_iov_md": false 00:16:22.866 }, 00:16:22.866 "memory_domains": [ 00:16:22.866 { 00:16:22.867 "dma_device_id": "system", 00:16:22.867 "dma_device_type": 1 00:16:22.867 }, 00:16:22.867 { 00:16:22.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.867 "dma_device_type": 2 00:16:22.867 } 00:16:22.867 ], 00:16:22.867 "driver_specific": {} 00:16:22.867 } 00:16:22.867 ] 00:16:22.867 13:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:22.867 13:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:22.867 13:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:22.867 13:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:22.867 13:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:22.867 13:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:22.867 13:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:22.867 13:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:22.867 13:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:22.867 13:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:22.867 13:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:22.867 13:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.867 13:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
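The trace above filters the `bdev_raid_get_bdevs all` output with `jq -r '.[] | select(.name == "Existed_Raid")'` and then checks the object's `state` and base-bdev counters (`verify_raid_bdev_state`). A minimal stand-alone sketch of that check in Python — the JSON literal is shaped like the dump in this log, and `verify_state` is a hypothetical helper, not part of SPDK:

```python
import json

# JSON shaped like the `bdev_raid_get_bdevs all` dump in the trace
# (one base bdev claimed, raid still assembling).
raid_bdevs = json.loads("""
[
  {
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "raid1",
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 3,
    "base_bdevs_list": [
      {"name": "BaseBdev1", "is_configured": true},
      {"name": "BaseBdev2", "is_configured": false},
      {"name": "BaseBdev3", "is_configured": false}
    ]
  }
]
""")

def verify_state(bdevs, name, expected_state, operational):
    # Mirrors the jq filter: .[] | select(.name == name)
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == expected_state
    assert info["num_base_bdevs_operational"] == operational
    # The discovered counter should track the configured entries.
    configured = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert info["num_base_bdevs_discovered"] == configured
    return info

info = verify_state(raid_bdevs, "Existed_Raid", "configuring", 3)
print(info["num_base_bdevs_discovered"])  # prints 1
```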
00:16:23.126 13:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:23.126 "name": "Existed_Raid", 00:16:23.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.126 "strip_size_kb": 0, 00:16:23.126 "state": "configuring", 00:16:23.126 "raid_level": "raid1", 00:16:23.126 "superblock": false, 00:16:23.126 "num_base_bdevs": 3, 00:16:23.126 "num_base_bdevs_discovered": 1, 00:16:23.126 "num_base_bdevs_operational": 3, 00:16:23.126 "base_bdevs_list": [ 00:16:23.126 { 00:16:23.126 "name": "BaseBdev1", 00:16:23.126 "uuid": "544f528d-2de2-4b51-89f0-a683a3cdfc7a", 00:16:23.126 "is_configured": true, 00:16:23.126 "data_offset": 0, 00:16:23.126 "data_size": 65536 00:16:23.126 }, 00:16:23.126 { 00:16:23.126 "name": "BaseBdev2", 00:16:23.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.126 "is_configured": false, 00:16:23.126 "data_offset": 0, 00:16:23.126 "data_size": 0 00:16:23.126 }, 00:16:23.126 { 00:16:23.126 "name": "BaseBdev3", 00:16:23.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.126 "is_configured": false, 00:16:23.127 "data_offset": 0, 00:16:23.127 "data_size": 0 00:16:23.127 } 00:16:23.127 ] 00:16:23.127 }' 00:16:23.127 13:16:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:23.127 13:16:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.064 13:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:24.323 [2024-07-26 13:16:04.596814] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:24.323 [2024-07-26 13:16:04.596853] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1602810 name Existed_Raid, state configuring 00:16:24.323 13:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:24.323 [2024-07-26 13:16:04.825444] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:24.323 [2024-07-26 13:16:04.826859] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:24.323 [2024-07-26 13:16:04.826891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:24.323 [2024-07-26 13:16:04.826902] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:24.323 [2024-07-26 13:16:04.826913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:24.323 13:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:24.323 13:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:24.323 13:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:24.323 13:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:24.323 13:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:24.323 13:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:24.323 13:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:24.323 13:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:24.323 13:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:24.323 13:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:24.323 13:16:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:24.323 13:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:24.323 13:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.323 13:16:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.582 13:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:24.582 "name": "Existed_Raid", 00:16:24.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.582 "strip_size_kb": 0, 00:16:24.582 "state": "configuring", 00:16:24.582 "raid_level": "raid1", 00:16:24.582 "superblock": false, 00:16:24.582 "num_base_bdevs": 3, 00:16:24.582 "num_base_bdevs_discovered": 1, 00:16:24.582 "num_base_bdevs_operational": 3, 00:16:24.582 "base_bdevs_list": [ 00:16:24.582 { 00:16:24.582 "name": "BaseBdev1", 00:16:24.582 "uuid": "544f528d-2de2-4b51-89f0-a683a3cdfc7a", 00:16:24.582 "is_configured": true, 00:16:24.582 "data_offset": 0, 00:16:24.582 "data_size": 65536 00:16:24.582 }, 00:16:24.582 { 00:16:24.582 "name": "BaseBdev2", 00:16:24.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.582 "is_configured": false, 00:16:24.582 "data_offset": 0, 00:16:24.582 "data_size": 0 00:16:24.582 }, 00:16:24.582 { 00:16:24.582 "name": "BaseBdev3", 00:16:24.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.582 "is_configured": false, 00:16:24.582 "data_offset": 0, 00:16:24.582 "data_size": 0 00:16:24.582 } 00:16:24.582 ] 00:16:24.582 }' 00:16:24.582 13:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:24.582 13:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.150 13:16:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:25.409 [2024-07-26 13:16:05.827217] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:25.409 BaseBdev2 00:16:25.409 13:16:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:25.409 13:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:25.409 13:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:25.409 13:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:25.409 13:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:25.409 13:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:25.409 13:16:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:25.668 13:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:25.927 [ 00:16:25.927 { 00:16:25.927 "name": "BaseBdev2", 00:16:25.927 "aliases": [ 00:16:25.927 "4dcf0b43-07ec-46f1-9fd7-0b025aa92313" 00:16:25.927 ], 00:16:25.927 "product_name": "Malloc disk", 00:16:25.927 "block_size": 512, 00:16:25.927 "num_blocks": 65536, 00:16:25.927 "uuid": "4dcf0b43-07ec-46f1-9fd7-0b025aa92313", 00:16:25.927 "assigned_rate_limits": { 00:16:25.927 "rw_ios_per_sec": 0, 00:16:25.927 "rw_mbytes_per_sec": 0, 00:16:25.927 "r_mbytes_per_sec": 0, 00:16:25.927 "w_mbytes_per_sec": 0 00:16:25.927 }, 00:16:25.927 "claimed": true, 00:16:25.927 "claim_type": 
"exclusive_write", 00:16:25.927 "zoned": false, 00:16:25.927 "supported_io_types": { 00:16:25.927 "read": true, 00:16:25.927 "write": true, 00:16:25.927 "unmap": true, 00:16:25.927 "flush": true, 00:16:25.927 "reset": true, 00:16:25.927 "nvme_admin": false, 00:16:25.927 "nvme_io": false, 00:16:25.927 "nvme_io_md": false, 00:16:25.927 "write_zeroes": true, 00:16:25.927 "zcopy": true, 00:16:25.927 "get_zone_info": false, 00:16:25.927 "zone_management": false, 00:16:25.927 "zone_append": false, 00:16:25.927 "compare": false, 00:16:25.927 "compare_and_write": false, 00:16:25.927 "abort": true, 00:16:25.927 "seek_hole": false, 00:16:25.927 "seek_data": false, 00:16:25.927 "copy": true, 00:16:25.927 "nvme_iov_md": false 00:16:25.927 }, 00:16:25.927 "memory_domains": [ 00:16:25.927 { 00:16:25.927 "dma_device_id": "system", 00:16:25.927 "dma_device_type": 1 00:16:25.927 }, 00:16:25.927 { 00:16:25.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.927 "dma_device_type": 2 00:16:25.927 } 00:16:25.927 ], 00:16:25.927 "driver_specific": {} 00:16:25.927 } 00:16:25.927 ] 00:16:25.927 13:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:25.927 13:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:25.927 13:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:25.927 13:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:25.927 13:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:25.927 13:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:25.927 13:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:25.927 13:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
00:16:25.927 13:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:25.927 13:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:25.927 13:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:25.927 13:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:25.927 13:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:25.927 13:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.927 13:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.186 13:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:26.186 "name": "Existed_Raid", 00:16:26.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.186 "strip_size_kb": 0, 00:16:26.186 "state": "configuring", 00:16:26.186 "raid_level": "raid1", 00:16:26.186 "superblock": false, 00:16:26.186 "num_base_bdevs": 3, 00:16:26.186 "num_base_bdevs_discovered": 2, 00:16:26.186 "num_base_bdevs_operational": 3, 00:16:26.186 "base_bdevs_list": [ 00:16:26.186 { 00:16:26.186 "name": "BaseBdev1", 00:16:26.186 "uuid": "544f528d-2de2-4b51-89f0-a683a3cdfc7a", 00:16:26.186 "is_configured": true, 00:16:26.186 "data_offset": 0, 00:16:26.186 "data_size": 65536 00:16:26.186 }, 00:16:26.186 { 00:16:26.186 "name": "BaseBdev2", 00:16:26.186 "uuid": "4dcf0b43-07ec-46f1-9fd7-0b025aa92313", 00:16:26.186 "is_configured": true, 00:16:26.186 "data_offset": 0, 00:16:26.186 "data_size": 65536 00:16:26.186 }, 00:16:26.186 { 00:16:26.186 "name": "BaseBdev3", 00:16:26.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.186 "is_configured": false, 00:16:26.186 
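Across the trace, the raid bdev stays in `configuring` as `num_base_bdevs_discovered` climbs 0 → 1 → 2, and it can only go online once the discovered count reaches `num_base_bdevs_operational`. A toy model of that progression — plain Python illustrating the counter logic the test checks, not the SPDK implementation:

```python
# Toy model of the discovery counter seen in the trace: the raid bdev
# reports "configuring" until every operational base bdev is claimed.
def raid_state(num_discovered: int, num_operational: int) -> str:
    return "online" if num_discovered >= num_operational else "configuring"

# Progression for a 3-disk raid1 as base bdevs are claimed one by one.
states = [raid_state(n, 3) for n in range(4)]
print(states)  # ['configuring', 'configuring', 'configuring', 'online']
```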
"data_offset": 0, 00:16:26.187 "data_size": 0 00:16:26.187 } 00:16:26.187 ] 00:16:26.187 }' 00:16:26.187 13:16:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:26.187 13:16:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.755 13:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:27.013 [2024-07-26 13:16:07.294217] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:27.013 [2024-07-26 13:16:07.294252] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1603710 00:16:27.013 [2024-07-26 13:16:07.294264] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:27.013 [2024-07-26 13:16:07.294444] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16033e0 00:16:27.013 [2024-07-26 13:16:07.294560] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1603710 00:16:27.013 [2024-07-26 13:16:07.294569] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1603710 00:16:27.013 [2024-07-26 13:16:07.294716] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.013 BaseBdev3 00:16:27.013 13:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:27.014 13:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:27.014 13:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:27.014 13:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:27.014 13:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:27.014 13:16:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:27.014 13:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:27.014 13:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:27.273 [ 00:16:27.273 { 00:16:27.273 "name": "BaseBdev3", 00:16:27.273 "aliases": [ 00:16:27.273 "b7a23934-28ea-4da7-97d8-a0d9dedec226" 00:16:27.273 ], 00:16:27.273 "product_name": "Malloc disk", 00:16:27.273 "block_size": 512, 00:16:27.273 "num_blocks": 65536, 00:16:27.273 "uuid": "b7a23934-28ea-4da7-97d8-a0d9dedec226", 00:16:27.273 "assigned_rate_limits": { 00:16:27.273 "rw_ios_per_sec": 0, 00:16:27.273 "rw_mbytes_per_sec": 0, 00:16:27.273 "r_mbytes_per_sec": 0, 00:16:27.273 "w_mbytes_per_sec": 0 00:16:27.273 }, 00:16:27.273 "claimed": true, 00:16:27.273 "claim_type": "exclusive_write", 00:16:27.273 "zoned": false, 00:16:27.273 "supported_io_types": { 00:16:27.273 "read": true, 00:16:27.273 "write": true, 00:16:27.273 "unmap": true, 00:16:27.273 "flush": true, 00:16:27.273 "reset": true, 00:16:27.273 "nvme_admin": false, 00:16:27.273 "nvme_io": false, 00:16:27.273 "nvme_io_md": false, 00:16:27.273 "write_zeroes": true, 00:16:27.273 "zcopy": true, 00:16:27.273 "get_zone_info": false, 00:16:27.273 "zone_management": false, 00:16:27.273 "zone_append": false, 00:16:27.273 "compare": false, 00:16:27.273 "compare_and_write": false, 00:16:27.273 "abort": true, 00:16:27.273 "seek_hole": false, 00:16:27.273 "seek_data": false, 00:16:27.273 "copy": true, 00:16:27.273 "nvme_iov_md": false 00:16:27.273 }, 00:16:27.273 "memory_domains": [ 00:16:27.273 { 00:16:27.273 "dma_device_id": "system", 00:16:27.273 "dma_device_type": 1 00:16:27.273 }, 00:16:27.273 { 
00:16:27.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.273 "dma_device_type": 2 00:16:27.273 } 00:16:27.273 ], 00:16:27.273 "driver_specific": {} 00:16:27.273 } 00:16:27.273 ] 00:16:27.273 13:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:27.273 13:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:27.273 13:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:27.273 13:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:27.273 13:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:27.273 13:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:27.273 13:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:27.273 13:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:27.273 13:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:27.273 13:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:27.273 13:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:27.273 13:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:27.273 13:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:27.273 13:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.273 13:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.532 13:16:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:27.532 "name": "Existed_Raid", 00:16:27.532 "uuid": "f4c693de-9729-4149-8335-63a55a0f13a9", 00:16:27.532 "strip_size_kb": 0, 00:16:27.532 "state": "online", 00:16:27.532 "raid_level": "raid1", 00:16:27.532 "superblock": false, 00:16:27.532 "num_base_bdevs": 3, 00:16:27.532 "num_base_bdevs_discovered": 3, 00:16:27.532 "num_base_bdevs_operational": 3, 00:16:27.532 "base_bdevs_list": [ 00:16:27.532 { 00:16:27.532 "name": "BaseBdev1", 00:16:27.532 "uuid": "544f528d-2de2-4b51-89f0-a683a3cdfc7a", 00:16:27.532 "is_configured": true, 00:16:27.532 "data_offset": 0, 00:16:27.532 "data_size": 65536 00:16:27.532 }, 00:16:27.532 { 00:16:27.532 "name": "BaseBdev2", 00:16:27.532 "uuid": "4dcf0b43-07ec-46f1-9fd7-0b025aa92313", 00:16:27.532 "is_configured": true, 00:16:27.532 "data_offset": 0, 00:16:27.532 "data_size": 65536 00:16:27.532 }, 00:16:27.532 { 00:16:27.532 "name": "BaseBdev3", 00:16:27.532 "uuid": "b7a23934-28ea-4da7-97d8-a0d9dedec226", 00:16:27.532 "is_configured": true, 00:16:27.532 "data_offset": 0, 00:16:27.532 "data_size": 65536 00:16:27.532 } 00:16:27.532 ] 00:16:27.532 }' 00:16:27.532 13:16:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:27.532 13:16:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.101 13:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:28.101 13:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:28.101 13:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:28.101 13:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:28.101 13:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:28.101 13:16:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:28.101 13:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:28.101 13:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:28.360 [2024-07-26 13:16:08.778415] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:28.360 13:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:28.360 "name": "Existed_Raid", 00:16:28.360 "aliases": [ 00:16:28.360 "f4c693de-9729-4149-8335-63a55a0f13a9" 00:16:28.360 ], 00:16:28.360 "product_name": "Raid Volume", 00:16:28.360 "block_size": 512, 00:16:28.360 "num_blocks": 65536, 00:16:28.360 "uuid": "f4c693de-9729-4149-8335-63a55a0f13a9", 00:16:28.360 "assigned_rate_limits": { 00:16:28.360 "rw_ios_per_sec": 0, 00:16:28.360 "rw_mbytes_per_sec": 0, 00:16:28.360 "r_mbytes_per_sec": 0, 00:16:28.360 "w_mbytes_per_sec": 0 00:16:28.360 }, 00:16:28.360 "claimed": false, 00:16:28.360 "zoned": false, 00:16:28.360 "supported_io_types": { 00:16:28.360 "read": true, 00:16:28.360 "write": true, 00:16:28.360 "unmap": false, 00:16:28.360 "flush": false, 00:16:28.360 "reset": true, 00:16:28.360 "nvme_admin": false, 00:16:28.360 "nvme_io": false, 00:16:28.360 "nvme_io_md": false, 00:16:28.360 "write_zeroes": true, 00:16:28.360 "zcopy": false, 00:16:28.360 "get_zone_info": false, 00:16:28.360 "zone_management": false, 00:16:28.360 "zone_append": false, 00:16:28.360 "compare": false, 00:16:28.360 "compare_and_write": false, 00:16:28.360 "abort": false, 00:16:28.360 "seek_hole": false, 00:16:28.360 "seek_data": false, 00:16:28.360 "copy": false, 00:16:28.360 "nvme_iov_md": false 00:16:28.360 }, 00:16:28.360 "memory_domains": [ 00:16:28.360 { 00:16:28.360 "dma_device_id": "system", 00:16:28.360 "dma_device_type": 1 00:16:28.360 }, 
00:16:28.360 { 00:16:28.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.360 "dma_device_type": 2 00:16:28.360 }, 00:16:28.360 { 00:16:28.360 "dma_device_id": "system", 00:16:28.360 "dma_device_type": 1 00:16:28.360 }, 00:16:28.360 { 00:16:28.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.360 "dma_device_type": 2 00:16:28.360 }, 00:16:28.360 { 00:16:28.360 "dma_device_id": "system", 00:16:28.360 "dma_device_type": 1 00:16:28.360 }, 00:16:28.360 { 00:16:28.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.360 "dma_device_type": 2 00:16:28.360 } 00:16:28.360 ], 00:16:28.360 "driver_specific": { 00:16:28.360 "raid": { 00:16:28.360 "uuid": "f4c693de-9729-4149-8335-63a55a0f13a9", 00:16:28.360 "strip_size_kb": 0, 00:16:28.360 "state": "online", 00:16:28.360 "raid_level": "raid1", 00:16:28.360 "superblock": false, 00:16:28.360 "num_base_bdevs": 3, 00:16:28.360 "num_base_bdevs_discovered": 3, 00:16:28.360 "num_base_bdevs_operational": 3, 00:16:28.360 "base_bdevs_list": [ 00:16:28.360 { 00:16:28.360 "name": "BaseBdev1", 00:16:28.360 "uuid": "544f528d-2de2-4b51-89f0-a683a3cdfc7a", 00:16:28.360 "is_configured": true, 00:16:28.360 "data_offset": 0, 00:16:28.360 "data_size": 65536 00:16:28.360 }, 00:16:28.360 { 00:16:28.360 "name": "BaseBdev2", 00:16:28.360 "uuid": "4dcf0b43-07ec-46f1-9fd7-0b025aa92313", 00:16:28.360 "is_configured": true, 00:16:28.360 "data_offset": 0, 00:16:28.360 "data_size": 65536 00:16:28.360 }, 00:16:28.360 { 00:16:28.360 "name": "BaseBdev3", 00:16:28.360 "uuid": "b7a23934-28ea-4da7-97d8-a0d9dedec226", 00:16:28.360 "is_configured": true, 00:16:28.360 "data_offset": 0, 00:16:28.360 "data_size": 65536 00:16:28.360 } 00:16:28.360 ] 00:16:28.360 } 00:16:28.360 } 00:16:28.361 }' 00:16:28.361 13:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:28.361 13:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # 
base_bdev_names='BaseBdev1 00:16:28.361 BaseBdev2 00:16:28.361 BaseBdev3' 00:16:28.361 13:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:28.361 13:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:28.361 13:16:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:28.620 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:28.620 "name": "BaseBdev1", 00:16:28.620 "aliases": [ 00:16:28.620 "544f528d-2de2-4b51-89f0-a683a3cdfc7a" 00:16:28.620 ], 00:16:28.620 "product_name": "Malloc disk", 00:16:28.620 "block_size": 512, 00:16:28.620 "num_blocks": 65536, 00:16:28.620 "uuid": "544f528d-2de2-4b51-89f0-a683a3cdfc7a", 00:16:28.620 "assigned_rate_limits": { 00:16:28.620 "rw_ios_per_sec": 0, 00:16:28.620 "rw_mbytes_per_sec": 0, 00:16:28.620 "r_mbytes_per_sec": 0, 00:16:28.620 "w_mbytes_per_sec": 0 00:16:28.620 }, 00:16:28.620 "claimed": true, 00:16:28.620 "claim_type": "exclusive_write", 00:16:28.620 "zoned": false, 00:16:28.620 "supported_io_types": { 00:16:28.620 "read": true, 00:16:28.620 "write": true, 00:16:28.620 "unmap": true, 00:16:28.620 "flush": true, 00:16:28.620 "reset": true, 00:16:28.620 "nvme_admin": false, 00:16:28.620 "nvme_io": false, 00:16:28.620 "nvme_io_md": false, 00:16:28.620 "write_zeroes": true, 00:16:28.620 "zcopy": true, 00:16:28.620 "get_zone_info": false, 00:16:28.620 "zone_management": false, 00:16:28.620 "zone_append": false, 00:16:28.620 "compare": false, 00:16:28.620 "compare_and_write": false, 00:16:28.620 "abort": true, 00:16:28.620 "seek_hole": false, 00:16:28.620 "seek_data": false, 00:16:28.620 "copy": true, 00:16:28.620 "nvme_iov_md": false 00:16:28.620 }, 00:16:28.620 "memory_domains": [ 00:16:28.620 { 00:16:28.620 "dma_device_id": "system", 00:16:28.620 
"dma_device_type": 1 00:16:28.620 }, 00:16:28.620 { 00:16:28.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.620 "dma_device_type": 2 00:16:28.620 } 00:16:28.620 ], 00:16:28.620 "driver_specific": {} 00:16:28.620 }' 00:16:28.620 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:28.620 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:28.879 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:28.879 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:28.879 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:28.879 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:28.879 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:28.879 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:28.879 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:28.879 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:28.879 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.138 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:29.138 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:29.138 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:29.138 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:29.138 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:29.138 "name": 
"BaseBdev2", 00:16:29.138 "aliases": [ 00:16:29.138 "4dcf0b43-07ec-46f1-9fd7-0b025aa92313" 00:16:29.138 ], 00:16:29.139 "product_name": "Malloc disk", 00:16:29.139 "block_size": 512, 00:16:29.139 "num_blocks": 65536, 00:16:29.139 "uuid": "4dcf0b43-07ec-46f1-9fd7-0b025aa92313", 00:16:29.139 "assigned_rate_limits": { 00:16:29.139 "rw_ios_per_sec": 0, 00:16:29.139 "rw_mbytes_per_sec": 0, 00:16:29.139 "r_mbytes_per_sec": 0, 00:16:29.139 "w_mbytes_per_sec": 0 00:16:29.139 }, 00:16:29.139 "claimed": true, 00:16:29.139 "claim_type": "exclusive_write", 00:16:29.139 "zoned": false, 00:16:29.139 "supported_io_types": { 00:16:29.139 "read": true, 00:16:29.139 "write": true, 00:16:29.139 "unmap": true, 00:16:29.139 "flush": true, 00:16:29.139 "reset": true, 00:16:29.139 "nvme_admin": false, 00:16:29.139 "nvme_io": false, 00:16:29.139 "nvme_io_md": false, 00:16:29.139 "write_zeroes": true, 00:16:29.139 "zcopy": true, 00:16:29.139 "get_zone_info": false, 00:16:29.139 "zone_management": false, 00:16:29.139 "zone_append": false, 00:16:29.139 "compare": false, 00:16:29.139 "compare_and_write": false, 00:16:29.139 "abort": true, 00:16:29.139 "seek_hole": false, 00:16:29.139 "seek_data": false, 00:16:29.139 "copy": true, 00:16:29.139 "nvme_iov_md": false 00:16:29.139 }, 00:16:29.139 "memory_domains": [ 00:16:29.139 { 00:16:29.139 "dma_device_id": "system", 00:16:29.139 "dma_device_type": 1 00:16:29.139 }, 00:16:29.139 { 00:16:29.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.139 "dma_device_type": 2 00:16:29.139 } 00:16:29.139 ], 00:16:29.139 "driver_specific": {} 00:16:29.139 }' 00:16:29.139 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.398 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.398 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:29.398 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq 
.md_size 00:16:29.398 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.398 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:29.398 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:29.398 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:29.398 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:29.398 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.656 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.657 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:29.657 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:29.657 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:29.657 13:16:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:29.915 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:29.915 "name": "BaseBdev3", 00:16:29.915 "aliases": [ 00:16:29.915 "b7a23934-28ea-4da7-97d8-a0d9dedec226" 00:16:29.915 ], 00:16:29.915 "product_name": "Malloc disk", 00:16:29.915 "block_size": 512, 00:16:29.915 "num_blocks": 65536, 00:16:29.915 "uuid": "b7a23934-28ea-4da7-97d8-a0d9dedec226", 00:16:29.915 "assigned_rate_limits": { 00:16:29.915 "rw_ios_per_sec": 0, 00:16:29.915 "rw_mbytes_per_sec": 0, 00:16:29.915 "r_mbytes_per_sec": 0, 00:16:29.915 "w_mbytes_per_sec": 0 00:16:29.915 }, 00:16:29.915 "claimed": true, 00:16:29.915 "claim_type": "exclusive_write", 00:16:29.915 "zoned": false, 00:16:29.916 "supported_io_types": { 
00:16:29.916 "read": true, 00:16:29.916 "write": true, 00:16:29.916 "unmap": true, 00:16:29.916 "flush": true, 00:16:29.916 "reset": true, 00:16:29.916 "nvme_admin": false, 00:16:29.916 "nvme_io": false, 00:16:29.916 "nvme_io_md": false, 00:16:29.916 "write_zeroes": true, 00:16:29.916 "zcopy": true, 00:16:29.916 "get_zone_info": false, 00:16:29.916 "zone_management": false, 00:16:29.916 "zone_append": false, 00:16:29.916 "compare": false, 00:16:29.916 "compare_and_write": false, 00:16:29.916 "abort": true, 00:16:29.916 "seek_hole": false, 00:16:29.916 "seek_data": false, 00:16:29.916 "copy": true, 00:16:29.916 "nvme_iov_md": false 00:16:29.916 }, 00:16:29.916 "memory_domains": [ 00:16:29.916 { 00:16:29.916 "dma_device_id": "system", 00:16:29.916 "dma_device_type": 1 00:16:29.916 }, 00:16:29.916 { 00:16:29.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.916 "dma_device_type": 2 00:16:29.916 } 00:16:29.916 ], 00:16:29.916 "driver_specific": {} 00:16:29.916 }' 00:16:29.916 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.916 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.916 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:29.916 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.916 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.916 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:29.916 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:29.916 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:30.175 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:30.175 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:16:30.175 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:30.175 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:30.175 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:30.434 [2024-07-26 13:16:10.775440] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:30.434 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:30.434 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:30.434 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:30.434 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:30.434 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:30.434 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:30.434 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:30.434 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:30.434 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:30.434 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:30.434 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:30.434 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:30.434 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:30.434 13:16:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:30.434 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:30.434 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.434 13:16:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.693 13:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:30.693 "name": "Existed_Raid", 00:16:30.693 "uuid": "f4c693de-9729-4149-8335-63a55a0f13a9", 00:16:30.693 "strip_size_kb": 0, 00:16:30.693 "state": "online", 00:16:30.693 "raid_level": "raid1", 00:16:30.693 "superblock": false, 00:16:30.693 "num_base_bdevs": 3, 00:16:30.693 "num_base_bdevs_discovered": 2, 00:16:30.693 "num_base_bdevs_operational": 2, 00:16:30.693 "base_bdevs_list": [ 00:16:30.693 { 00:16:30.693 "name": null, 00:16:30.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.693 "is_configured": false, 00:16:30.693 "data_offset": 0, 00:16:30.693 "data_size": 65536 00:16:30.693 }, 00:16:30.693 { 00:16:30.693 "name": "BaseBdev2", 00:16:30.693 "uuid": "4dcf0b43-07ec-46f1-9fd7-0b025aa92313", 00:16:30.693 "is_configured": true, 00:16:30.693 "data_offset": 0, 00:16:30.693 "data_size": 65536 00:16:30.693 }, 00:16:30.693 { 00:16:30.693 "name": "BaseBdev3", 00:16:30.693 "uuid": "b7a23934-28ea-4da7-97d8-a0d9dedec226", 00:16:30.693 "is_configured": true, 00:16:30.693 "data_offset": 0, 00:16:30.693 "data_size": 65536 00:16:30.693 } 00:16:30.693 ] 00:16:30.693 }' 00:16:30.693 13:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:30.693 13:16:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.262 13:16:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:31.262 13:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:31.262 13:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:31.262 13:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.521 13:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:31.521 13:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:31.521 13:16:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:31.521 [2024-07-26 13:16:12.043779] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:31.780 13:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:31.780 13:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:31.780 13:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.780 13:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:31.780 13:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:31.780 13:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:31.780 13:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:32.039 [2024-07-26 
13:16:12.511192] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:32.040 [2024-07-26 13:16:12.511257] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.040 [2024-07-26 13:16:12.521156] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.040 [2024-07-26 13:16:12.521185] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.040 [2024-07-26 13:16:12.521195] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1603710 name Existed_Raid, state offline 00:16:32.040 13:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:32.040 13:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:32.040 13:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.040 13:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:32.299 13:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:32.299 13:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:32.299 13:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:16:32.299 13:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:32.299 13:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:32.299 13:16:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:32.559 BaseBdev2 00:16:32.559 13:16:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:32.559 13:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:32.559 13:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:32.559 13:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:32.559 13:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:32.559 13:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:32.559 13:16:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:32.817 13:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:33.076 [ 00:16:33.076 { 00:16:33.076 "name": "BaseBdev2", 00:16:33.076 "aliases": [ 00:16:33.076 "4078a90e-080c-438a-877b-c9ff037de524" 00:16:33.076 ], 00:16:33.076 "product_name": "Malloc disk", 00:16:33.076 "block_size": 512, 00:16:33.076 "num_blocks": 65536, 00:16:33.076 "uuid": "4078a90e-080c-438a-877b-c9ff037de524", 00:16:33.076 "assigned_rate_limits": { 00:16:33.076 "rw_ios_per_sec": 0, 00:16:33.076 "rw_mbytes_per_sec": 0, 00:16:33.076 "r_mbytes_per_sec": 0, 00:16:33.076 "w_mbytes_per_sec": 0 00:16:33.076 }, 00:16:33.076 "claimed": false, 00:16:33.076 "zoned": false, 00:16:33.076 "supported_io_types": { 00:16:33.076 "read": true, 00:16:33.076 "write": true, 00:16:33.077 "unmap": true, 00:16:33.077 "flush": true, 00:16:33.077 "reset": true, 00:16:33.077 "nvme_admin": false, 00:16:33.077 "nvme_io": false, 00:16:33.077 "nvme_io_md": false, 00:16:33.077 "write_zeroes": true, 00:16:33.077 "zcopy": true, 00:16:33.077 "get_zone_info": false, 
00:16:33.077 "zone_management": false, 00:16:33.077 "zone_append": false, 00:16:33.077 "compare": false, 00:16:33.077 "compare_and_write": false, 00:16:33.077 "abort": true, 00:16:33.077 "seek_hole": false, 00:16:33.077 "seek_data": false, 00:16:33.077 "copy": true, 00:16:33.077 "nvme_iov_md": false 00:16:33.077 }, 00:16:33.077 "memory_domains": [ 00:16:33.077 { 00:16:33.077 "dma_device_id": "system", 00:16:33.077 "dma_device_type": 1 00:16:33.077 }, 00:16:33.077 { 00:16:33.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.077 "dma_device_type": 2 00:16:33.077 } 00:16:33.077 ], 00:16:33.077 "driver_specific": {} 00:16:33.077 } 00:16:33.077 ] 00:16:33.077 13:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:33.077 13:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:33.077 13:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:33.077 13:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:33.336 BaseBdev3 00:16:33.336 13:16:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:33.336 13:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:33.336 13:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:33.336 13:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:33.336 13:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:33.336 13:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:33.336 13:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:33.595 13:16:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:33.595 [ 00:16:33.595 { 00:16:33.595 "name": "BaseBdev3", 00:16:33.595 "aliases": [ 00:16:33.595 "b2f7afe9-9a61-495a-9d08-8141abf4b181" 00:16:33.595 ], 00:16:33.595 "product_name": "Malloc disk", 00:16:33.595 "block_size": 512, 00:16:33.595 "num_blocks": 65536, 00:16:33.595 "uuid": "b2f7afe9-9a61-495a-9d08-8141abf4b181", 00:16:33.595 "assigned_rate_limits": { 00:16:33.595 "rw_ios_per_sec": 0, 00:16:33.595 "rw_mbytes_per_sec": 0, 00:16:33.595 "r_mbytes_per_sec": 0, 00:16:33.595 "w_mbytes_per_sec": 0 00:16:33.595 }, 00:16:33.595 "claimed": false, 00:16:33.595 "zoned": false, 00:16:33.595 "supported_io_types": { 00:16:33.595 "read": true, 00:16:33.595 "write": true, 00:16:33.595 "unmap": true, 00:16:33.595 "flush": true, 00:16:33.595 "reset": true, 00:16:33.595 "nvme_admin": false, 00:16:33.595 "nvme_io": false, 00:16:33.595 "nvme_io_md": false, 00:16:33.595 "write_zeroes": true, 00:16:33.595 "zcopy": true, 00:16:33.595 "get_zone_info": false, 00:16:33.595 "zone_management": false, 00:16:33.595 "zone_append": false, 00:16:33.595 "compare": false, 00:16:33.595 "compare_and_write": false, 00:16:33.595 "abort": true, 00:16:33.595 "seek_hole": false, 00:16:33.595 "seek_data": false, 00:16:33.595 "copy": true, 00:16:33.595 "nvme_iov_md": false 00:16:33.595 }, 00:16:33.595 "memory_domains": [ 00:16:33.595 { 00:16:33.595 "dma_device_id": "system", 00:16:33.595 "dma_device_type": 1 00:16:33.595 }, 00:16:33.595 { 00:16:33.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.595 "dma_device_type": 2 00:16:33.595 } 00:16:33.595 ], 00:16:33.595 "driver_specific": {} 00:16:33.595 } 00:16:33.595 ] 00:16:33.595 13:16:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:33.595 13:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:33.854 13:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:33.854 13:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:33.854 [2024-07-26 13:16:14.330273] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:33.854 [2024-07-26 13:16:14.330308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:33.854 [2024-07-26 13:16:14.330326] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:33.854 [2024-07-26 13:16:14.331558] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:33.854 13:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:33.854 13:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:33.854 13:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:33.854 13:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:33.854 13:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:33.854 13:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:33.854 13:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:33.854 13:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:16:33.854 13:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:33.854 13:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:33.854 13:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.854 13:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.114 13:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:34.114 "name": "Existed_Raid", 00:16:34.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.114 "strip_size_kb": 0, 00:16:34.114 "state": "configuring", 00:16:34.114 "raid_level": "raid1", 00:16:34.114 "superblock": false, 00:16:34.114 "num_base_bdevs": 3, 00:16:34.114 "num_base_bdevs_discovered": 2, 00:16:34.114 "num_base_bdevs_operational": 3, 00:16:34.114 "base_bdevs_list": [ 00:16:34.114 { 00:16:34.114 "name": "BaseBdev1", 00:16:34.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.114 "is_configured": false, 00:16:34.114 "data_offset": 0, 00:16:34.114 "data_size": 0 00:16:34.114 }, 00:16:34.114 { 00:16:34.114 "name": "BaseBdev2", 00:16:34.114 "uuid": "4078a90e-080c-438a-877b-c9ff037de524", 00:16:34.114 "is_configured": true, 00:16:34.114 "data_offset": 0, 00:16:34.114 "data_size": 65536 00:16:34.114 }, 00:16:34.114 { 00:16:34.114 "name": "BaseBdev3", 00:16:34.114 "uuid": "b2f7afe9-9a61-495a-9d08-8141abf4b181", 00:16:34.114 "is_configured": true, 00:16:34.114 "data_offset": 0, 00:16:34.114 "data_size": 65536 00:16:34.114 } 00:16:34.114 ] 00:16:34.114 }' 00:16:34.114 13:16:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:34.114 13:16:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.683 13:16:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:34.950 [2024-07-26 13:16:15.364972] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:34.950 13:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:34.950 13:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:34.950 13:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:34.950 13:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:34.950 13:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:34.950 13:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:34.950 13:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:34.950 13:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:34.950 13:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:34.950 13:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:34.950 13:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.950 13:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.209 13:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:35.209 "name": "Existed_Raid", 00:16:35.209 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:35.209 "strip_size_kb": 0, 00:16:35.209 "state": "configuring", 00:16:35.209 "raid_level": "raid1", 00:16:35.209 "superblock": false, 00:16:35.209 "num_base_bdevs": 3, 00:16:35.209 "num_base_bdevs_discovered": 1, 00:16:35.209 "num_base_bdevs_operational": 3, 00:16:35.209 "base_bdevs_list": [ 00:16:35.209 { 00:16:35.209 "name": "BaseBdev1", 00:16:35.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.209 "is_configured": false, 00:16:35.209 "data_offset": 0, 00:16:35.209 "data_size": 0 00:16:35.209 }, 00:16:35.209 { 00:16:35.209 "name": null, 00:16:35.209 "uuid": "4078a90e-080c-438a-877b-c9ff037de524", 00:16:35.209 "is_configured": false, 00:16:35.209 "data_offset": 0, 00:16:35.209 "data_size": 65536 00:16:35.209 }, 00:16:35.209 { 00:16:35.209 "name": "BaseBdev3", 00:16:35.209 "uuid": "b2f7afe9-9a61-495a-9d08-8141abf4b181", 00:16:35.209 "is_configured": true, 00:16:35.209 "data_offset": 0, 00:16:35.209 "data_size": 65536 00:16:35.209 } 00:16:35.209 ] 00:16:35.209 }' 00:16:35.209 13:16:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:35.209 13:16:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.777 13:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.777 13:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:36.050 13:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:36.050 13:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:36.320 [2024-07-26 13:16:16.595324] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:36.321 
BaseBdev1 00:16:36.321 13:16:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:36.321 13:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:36.321 13:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:36.321 13:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:36.321 13:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:36.321 13:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:36.321 13:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:36.579 13:16:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:36.579 [ 00:16:36.579 { 00:16:36.579 "name": "BaseBdev1", 00:16:36.579 "aliases": [ 00:16:36.579 "e60ec98f-f5b4-4864-9c53-21d2aa88b2af" 00:16:36.579 ], 00:16:36.579 "product_name": "Malloc disk", 00:16:36.579 "block_size": 512, 00:16:36.579 "num_blocks": 65536, 00:16:36.579 "uuid": "e60ec98f-f5b4-4864-9c53-21d2aa88b2af", 00:16:36.579 "assigned_rate_limits": { 00:16:36.579 "rw_ios_per_sec": 0, 00:16:36.579 "rw_mbytes_per_sec": 0, 00:16:36.579 "r_mbytes_per_sec": 0, 00:16:36.579 "w_mbytes_per_sec": 0 00:16:36.579 }, 00:16:36.579 "claimed": true, 00:16:36.579 "claim_type": "exclusive_write", 00:16:36.579 "zoned": false, 00:16:36.579 "supported_io_types": { 00:16:36.579 "read": true, 00:16:36.579 "write": true, 00:16:36.579 "unmap": true, 00:16:36.579 "flush": true, 00:16:36.579 "reset": true, 00:16:36.579 "nvme_admin": false, 00:16:36.579 "nvme_io": false, 00:16:36.579 
"nvme_io_md": false, 00:16:36.579 "write_zeroes": true, 00:16:36.579 "zcopy": true, 00:16:36.579 "get_zone_info": false, 00:16:36.579 "zone_management": false, 00:16:36.579 "zone_append": false, 00:16:36.579 "compare": false, 00:16:36.579 "compare_and_write": false, 00:16:36.579 "abort": true, 00:16:36.579 "seek_hole": false, 00:16:36.579 "seek_data": false, 00:16:36.579 "copy": true, 00:16:36.579 "nvme_iov_md": false 00:16:36.579 }, 00:16:36.579 "memory_domains": [ 00:16:36.579 { 00:16:36.579 "dma_device_id": "system", 00:16:36.579 "dma_device_type": 1 00:16:36.579 }, 00:16:36.579 { 00:16:36.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.579 "dma_device_type": 2 00:16:36.579 } 00:16:36.579 ], 00:16:36.579 "driver_specific": {} 00:16:36.579 } 00:16:36.579 ] 00:16:36.579 13:16:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:36.579 13:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:36.579 13:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:36.579 13:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:36.579 13:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:36.579 13:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:36.579 13:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:36.579 13:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:36.579 13:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:36.579 13:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:36.579 13:16:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:16:36.580 13:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.580 13:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.838 13:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:36.839 "name": "Existed_Raid", 00:16:36.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.839 "strip_size_kb": 0, 00:16:36.839 "state": "configuring", 00:16:36.839 "raid_level": "raid1", 00:16:36.839 "superblock": false, 00:16:36.839 "num_base_bdevs": 3, 00:16:36.839 "num_base_bdevs_discovered": 2, 00:16:36.839 "num_base_bdevs_operational": 3, 00:16:36.839 "base_bdevs_list": [ 00:16:36.839 { 00:16:36.839 "name": "BaseBdev1", 00:16:36.839 "uuid": "e60ec98f-f5b4-4864-9c53-21d2aa88b2af", 00:16:36.839 "is_configured": true, 00:16:36.839 "data_offset": 0, 00:16:36.839 "data_size": 65536 00:16:36.839 }, 00:16:36.839 { 00:16:36.839 "name": null, 00:16:36.839 "uuid": "4078a90e-080c-438a-877b-c9ff037de524", 00:16:36.839 "is_configured": false, 00:16:36.839 "data_offset": 0, 00:16:36.839 "data_size": 65536 00:16:36.839 }, 00:16:36.839 { 00:16:36.839 "name": "BaseBdev3", 00:16:36.839 "uuid": "b2f7afe9-9a61-495a-9d08-8141abf4b181", 00:16:36.839 "is_configured": true, 00:16:36.839 "data_offset": 0, 00:16:36.839 "data_size": 65536 00:16:36.839 } 00:16:36.839 ] 00:16:36.839 }' 00:16:36.839 13:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:36.839 13:16:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.406 13:16:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.406 13:16:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:37.665 13:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:37.665 13:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:37.924 [2024-07-26 13:16:18.343948] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:37.924 13:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:37.924 13:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:37.924 13:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:37.924 13:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:37.924 13:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:37.924 13:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:37.924 13:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:37.924 13:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:37.924 13:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:37.924 13:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:37.924 13:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.924 13:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:16:38.183 13:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:38.183 "name": "Existed_Raid", 00:16:38.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.183 "strip_size_kb": 0, 00:16:38.183 "state": "configuring", 00:16:38.183 "raid_level": "raid1", 00:16:38.183 "superblock": false, 00:16:38.183 "num_base_bdevs": 3, 00:16:38.183 "num_base_bdevs_discovered": 1, 00:16:38.183 "num_base_bdevs_operational": 3, 00:16:38.183 "base_bdevs_list": [ 00:16:38.183 { 00:16:38.183 "name": "BaseBdev1", 00:16:38.183 "uuid": "e60ec98f-f5b4-4864-9c53-21d2aa88b2af", 00:16:38.183 "is_configured": true, 00:16:38.183 "data_offset": 0, 00:16:38.183 "data_size": 65536 00:16:38.183 }, 00:16:38.183 { 00:16:38.183 "name": null, 00:16:38.183 "uuid": "4078a90e-080c-438a-877b-c9ff037de524", 00:16:38.183 "is_configured": false, 00:16:38.183 "data_offset": 0, 00:16:38.183 "data_size": 65536 00:16:38.183 }, 00:16:38.183 { 00:16:38.183 "name": null, 00:16:38.183 "uuid": "b2f7afe9-9a61-495a-9d08-8141abf4b181", 00:16:38.183 "is_configured": false, 00:16:38.183 "data_offset": 0, 00:16:38.183 "data_size": 65536 00:16:38.183 } 00:16:38.183 ] 00:16:38.183 }' 00:16:38.183 13:16:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:38.183 13:16:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.751 13:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.752 13:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:39.011 13:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:39.011 13:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:39.270 [2024-07-26 13:16:19.623339] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:39.270 13:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:39.270 13:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:39.270 13:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:39.270 13:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:39.270 13:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:39.270 13:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:39.270 13:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:39.270 13:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:39.270 13:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:39.271 13:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:39.271 13:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.271 13:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.530 13:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:39.530 "name": "Existed_Raid", 00:16:39.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.530 "strip_size_kb": 0, 
00:16:39.530 "state": "configuring", 00:16:39.530 "raid_level": "raid1", 00:16:39.530 "superblock": false, 00:16:39.530 "num_base_bdevs": 3, 00:16:39.530 "num_base_bdevs_discovered": 2, 00:16:39.530 "num_base_bdevs_operational": 3, 00:16:39.530 "base_bdevs_list": [ 00:16:39.530 { 00:16:39.530 "name": "BaseBdev1", 00:16:39.530 "uuid": "e60ec98f-f5b4-4864-9c53-21d2aa88b2af", 00:16:39.530 "is_configured": true, 00:16:39.530 "data_offset": 0, 00:16:39.530 "data_size": 65536 00:16:39.530 }, 00:16:39.530 { 00:16:39.530 "name": null, 00:16:39.530 "uuid": "4078a90e-080c-438a-877b-c9ff037de524", 00:16:39.530 "is_configured": false, 00:16:39.530 "data_offset": 0, 00:16:39.530 "data_size": 65536 00:16:39.530 }, 00:16:39.530 { 00:16:39.530 "name": "BaseBdev3", 00:16:39.530 "uuid": "b2f7afe9-9a61-495a-9d08-8141abf4b181", 00:16:39.530 "is_configured": true, 00:16:39.530 "data_offset": 0, 00:16:39.530 "data_size": 65536 00:16:39.530 } 00:16:39.530 ] 00:16:39.530 }' 00:16:39.530 13:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:39.530 13:16:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.098 13:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:40.098 13:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.358 13:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:40.358 13:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:40.617 [2024-07-26 13:16:20.902731] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:40.617 13:16:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:40.617 13:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:40.617 13:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:40.617 13:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:40.617 13:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:40.617 13:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:40.617 13:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:40.617 13:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:40.617 13:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:40.617 13:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:40.617 13:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.617 13:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.876 13:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:40.876 "name": "Existed_Raid", 00:16:40.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.876 "strip_size_kb": 0, 00:16:40.876 "state": "configuring", 00:16:40.876 "raid_level": "raid1", 00:16:40.876 "superblock": false, 00:16:40.876 "num_base_bdevs": 3, 00:16:40.876 "num_base_bdevs_discovered": 1, 00:16:40.876 "num_base_bdevs_operational": 3, 00:16:40.876 "base_bdevs_list": [ 00:16:40.876 { 00:16:40.876 "name": null, 00:16:40.877 "uuid": 
"e60ec98f-f5b4-4864-9c53-21d2aa88b2af", 00:16:40.877 "is_configured": false, 00:16:40.877 "data_offset": 0, 00:16:40.877 "data_size": 65536 00:16:40.877 }, 00:16:40.877 { 00:16:40.877 "name": null, 00:16:40.877 "uuid": "4078a90e-080c-438a-877b-c9ff037de524", 00:16:40.877 "is_configured": false, 00:16:40.877 "data_offset": 0, 00:16:40.877 "data_size": 65536 00:16:40.877 }, 00:16:40.877 { 00:16:40.877 "name": "BaseBdev3", 00:16:40.877 "uuid": "b2f7afe9-9a61-495a-9d08-8141abf4b181", 00:16:40.877 "is_configured": true, 00:16:40.877 "data_offset": 0, 00:16:40.877 "data_size": 65536 00:16:40.877 } 00:16:40.877 ] 00:16:40.877 }' 00:16:40.877 13:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:40.877 13:16:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.445 13:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.445 13:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:41.445 13:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:41.445 13:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:41.705 [2024-07-26 13:16:22.176392] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:41.705 13:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:41.705 13:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:41.705 13:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:16:41.705 13:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:41.705 13:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:41.705 13:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:41.705 13:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:41.705 13:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:41.705 13:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:41.705 13:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:41.705 13:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.705 13:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.966 13:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:41.966 "name": "Existed_Raid", 00:16:41.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.966 "strip_size_kb": 0, 00:16:41.966 "state": "configuring", 00:16:41.966 "raid_level": "raid1", 00:16:41.966 "superblock": false, 00:16:41.966 "num_base_bdevs": 3, 00:16:41.966 "num_base_bdevs_discovered": 2, 00:16:41.966 "num_base_bdevs_operational": 3, 00:16:41.966 "base_bdevs_list": [ 00:16:41.966 { 00:16:41.966 "name": null, 00:16:41.966 "uuid": "e60ec98f-f5b4-4864-9c53-21d2aa88b2af", 00:16:41.966 "is_configured": false, 00:16:41.966 "data_offset": 0, 00:16:41.966 "data_size": 65536 00:16:41.966 }, 00:16:41.966 { 00:16:41.966 "name": "BaseBdev2", 00:16:41.966 "uuid": "4078a90e-080c-438a-877b-c9ff037de524", 00:16:41.966 "is_configured": true, 
00:16:41.966 "data_offset": 0, 00:16:41.966 "data_size": 65536 00:16:41.966 }, 00:16:41.966 { 00:16:41.966 "name": "BaseBdev3", 00:16:41.966 "uuid": "b2f7afe9-9a61-495a-9d08-8141abf4b181", 00:16:41.966 "is_configured": true, 00:16:41.966 "data_offset": 0, 00:16:41.966 "data_size": 65536 00:16:41.966 } 00:16:41.966 ] 00:16:41.966 }' 00:16:41.966 13:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:41.966 13:16:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.534 13:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.535 13:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:42.794 13:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:42.794 13:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:42.794 13:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.053 13:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u e60ec98f-f5b4-4864-9c53-21d2aa88b2af 00:16:43.312 [2024-07-26 13:16:23.671616] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:43.312 [2024-07-26 13:16:23.671653] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x15fb0e0 00:16:43.312 [2024-07-26 13:16:23.671661] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:43.312 [2024-07-26 13:16:23.671837] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17b48e0 00:16:43.312 [2024-07-26 13:16:23.671950] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15fb0e0 00:16:43.312 [2024-07-26 13:16:23.671959] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x15fb0e0 00:16:43.312 [2024-07-26 13:16:23.672104] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.312 NewBaseBdev 00:16:43.312 13:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:43.312 13:16:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:43.312 13:16:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:43.312 13:16:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:43.312 13:16:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:43.312 13:16:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:43.312 13:16:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:43.572 13:16:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:43.831 [ 00:16:43.831 { 00:16:43.831 "name": "NewBaseBdev", 00:16:43.831 "aliases": [ 00:16:43.831 "e60ec98f-f5b4-4864-9c53-21d2aa88b2af" 00:16:43.831 ], 00:16:43.831 "product_name": "Malloc disk", 00:16:43.831 "block_size": 512, 00:16:43.831 "num_blocks": 65536, 00:16:43.831 "uuid": "e60ec98f-f5b4-4864-9c53-21d2aa88b2af", 00:16:43.831 "assigned_rate_limits": { 00:16:43.831 "rw_ios_per_sec": 0, 
00:16:43.831 "rw_mbytes_per_sec": 0, 00:16:43.831 "r_mbytes_per_sec": 0, 00:16:43.831 "w_mbytes_per_sec": 0 00:16:43.831 }, 00:16:43.831 "claimed": true, 00:16:43.831 "claim_type": "exclusive_write", 00:16:43.831 "zoned": false, 00:16:43.831 "supported_io_types": { 00:16:43.831 "read": true, 00:16:43.831 "write": true, 00:16:43.831 "unmap": true, 00:16:43.831 "flush": true, 00:16:43.831 "reset": true, 00:16:43.831 "nvme_admin": false, 00:16:43.831 "nvme_io": false, 00:16:43.831 "nvme_io_md": false, 00:16:43.831 "write_zeroes": true, 00:16:43.831 "zcopy": true, 00:16:43.831 "get_zone_info": false, 00:16:43.831 "zone_management": false, 00:16:43.831 "zone_append": false, 00:16:43.831 "compare": false, 00:16:43.831 "compare_and_write": false, 00:16:43.831 "abort": true, 00:16:43.831 "seek_hole": false, 00:16:43.831 "seek_data": false, 00:16:43.831 "copy": true, 00:16:43.831 "nvme_iov_md": false 00:16:43.831 }, 00:16:43.831 "memory_domains": [ 00:16:43.831 { 00:16:43.831 "dma_device_id": "system", 00:16:43.831 "dma_device_type": 1 00:16:43.831 }, 00:16:43.831 { 00:16:43.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.831 "dma_device_type": 2 00:16:43.831 } 00:16:43.831 ], 00:16:43.831 "driver_specific": {} 00:16:43.831 } 00:16:43.831 ] 00:16:43.831 13:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:43.831 13:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:43.831 13:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:43.831 13:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:43.831 13:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:43.831 13:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:43.831 13:16:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:43.831 13:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:43.831 13:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:43.831 13:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:43.831 13:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:43.831 13:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.831 13:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.091 13:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:44.091 "name": "Existed_Raid", 00:16:44.091 "uuid": "72e79068-d026-4512-abad-04442f701f4f", 00:16:44.091 "strip_size_kb": 0, 00:16:44.091 "state": "online", 00:16:44.091 "raid_level": "raid1", 00:16:44.091 "superblock": false, 00:16:44.091 "num_base_bdevs": 3, 00:16:44.091 "num_base_bdevs_discovered": 3, 00:16:44.091 "num_base_bdevs_operational": 3, 00:16:44.091 "base_bdevs_list": [ 00:16:44.091 { 00:16:44.091 "name": "NewBaseBdev", 00:16:44.091 "uuid": "e60ec98f-f5b4-4864-9c53-21d2aa88b2af", 00:16:44.091 "is_configured": true, 00:16:44.091 "data_offset": 0, 00:16:44.091 "data_size": 65536 00:16:44.091 }, 00:16:44.091 { 00:16:44.091 "name": "BaseBdev2", 00:16:44.091 "uuid": "4078a90e-080c-438a-877b-c9ff037de524", 00:16:44.091 "is_configured": true, 00:16:44.091 "data_offset": 0, 00:16:44.091 "data_size": 65536 00:16:44.091 }, 00:16:44.091 { 00:16:44.091 "name": "BaseBdev3", 00:16:44.091 "uuid": "b2f7afe9-9a61-495a-9d08-8141abf4b181", 00:16:44.091 "is_configured": true, 00:16:44.091 "data_offset": 0, 
00:16:44.091 "data_size": 65536 00:16:44.091 } 00:16:44.091 ] 00:16:44.091 }' 00:16:44.091 13:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:44.091 13:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.659 13:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:44.659 13:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:44.659 13:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:44.659 13:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:44.659 13:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:44.659 13:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:44.659 13:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:44.659 13:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:44.659 [2024-07-26 13:16:25.151948] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:44.659 13:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:44.659 "name": "Existed_Raid", 00:16:44.659 "aliases": [ 00:16:44.659 "72e79068-d026-4512-abad-04442f701f4f" 00:16:44.659 ], 00:16:44.659 "product_name": "Raid Volume", 00:16:44.659 "block_size": 512, 00:16:44.659 "num_blocks": 65536, 00:16:44.659 "uuid": "72e79068-d026-4512-abad-04442f701f4f", 00:16:44.659 "assigned_rate_limits": { 00:16:44.659 "rw_ios_per_sec": 0, 00:16:44.659 "rw_mbytes_per_sec": 0, 00:16:44.659 "r_mbytes_per_sec": 0, 00:16:44.659 "w_mbytes_per_sec": 0 00:16:44.659 }, 00:16:44.659 
"claimed": false, 00:16:44.659 "zoned": false, 00:16:44.659 "supported_io_types": { 00:16:44.659 "read": true, 00:16:44.659 "write": true, 00:16:44.659 "unmap": false, 00:16:44.659 "flush": false, 00:16:44.659 "reset": true, 00:16:44.659 "nvme_admin": false, 00:16:44.659 "nvme_io": false, 00:16:44.659 "nvme_io_md": false, 00:16:44.659 "write_zeroes": true, 00:16:44.659 "zcopy": false, 00:16:44.659 "get_zone_info": false, 00:16:44.659 "zone_management": false, 00:16:44.659 "zone_append": false, 00:16:44.659 "compare": false, 00:16:44.659 "compare_and_write": false, 00:16:44.659 "abort": false, 00:16:44.659 "seek_hole": false, 00:16:44.659 "seek_data": false, 00:16:44.659 "copy": false, 00:16:44.659 "nvme_iov_md": false 00:16:44.659 }, 00:16:44.659 "memory_domains": [ 00:16:44.659 { 00:16:44.659 "dma_device_id": "system", 00:16:44.659 "dma_device_type": 1 00:16:44.659 }, 00:16:44.659 { 00:16:44.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.659 "dma_device_type": 2 00:16:44.659 }, 00:16:44.659 { 00:16:44.659 "dma_device_id": "system", 00:16:44.659 "dma_device_type": 1 00:16:44.659 }, 00:16:44.659 { 00:16:44.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.659 "dma_device_type": 2 00:16:44.659 }, 00:16:44.659 { 00:16:44.659 "dma_device_id": "system", 00:16:44.659 "dma_device_type": 1 00:16:44.659 }, 00:16:44.659 { 00:16:44.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.659 "dma_device_type": 2 00:16:44.659 } 00:16:44.659 ], 00:16:44.659 "driver_specific": { 00:16:44.659 "raid": { 00:16:44.659 "uuid": "72e79068-d026-4512-abad-04442f701f4f", 00:16:44.659 "strip_size_kb": 0, 00:16:44.659 "state": "online", 00:16:44.659 "raid_level": "raid1", 00:16:44.659 "superblock": false, 00:16:44.659 "num_base_bdevs": 3, 00:16:44.659 "num_base_bdevs_discovered": 3, 00:16:44.659 "num_base_bdevs_operational": 3, 00:16:44.659 "base_bdevs_list": [ 00:16:44.659 { 00:16:44.659 "name": "NewBaseBdev", 00:16:44.660 "uuid": "e60ec98f-f5b4-4864-9c53-21d2aa88b2af", 
00:16:44.660 "is_configured": true, 00:16:44.660 "data_offset": 0, 00:16:44.660 "data_size": 65536 00:16:44.660 }, 00:16:44.660 { 00:16:44.660 "name": "BaseBdev2", 00:16:44.660 "uuid": "4078a90e-080c-438a-877b-c9ff037de524", 00:16:44.660 "is_configured": true, 00:16:44.660 "data_offset": 0, 00:16:44.660 "data_size": 65536 00:16:44.660 }, 00:16:44.660 { 00:16:44.660 "name": "BaseBdev3", 00:16:44.660 "uuid": "b2f7afe9-9a61-495a-9d08-8141abf4b181", 00:16:44.660 "is_configured": true, 00:16:44.660 "data_offset": 0, 00:16:44.660 "data_size": 65536 00:16:44.660 } 00:16:44.660 ] 00:16:44.660 } 00:16:44.660 } 00:16:44.660 }' 00:16:44.660 13:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:44.919 13:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:44.919 BaseBdev2 00:16:44.919 BaseBdev3' 00:16:44.919 13:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:44.919 13:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:44.919 13:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:44.919 13:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:44.919 "name": "NewBaseBdev", 00:16:44.919 "aliases": [ 00:16:44.919 "e60ec98f-f5b4-4864-9c53-21d2aa88b2af" 00:16:44.919 ], 00:16:44.919 "product_name": "Malloc disk", 00:16:44.919 "block_size": 512, 00:16:44.919 "num_blocks": 65536, 00:16:44.919 "uuid": "e60ec98f-f5b4-4864-9c53-21d2aa88b2af", 00:16:44.919 "assigned_rate_limits": { 00:16:44.919 "rw_ios_per_sec": 0, 00:16:44.919 "rw_mbytes_per_sec": 0, 00:16:44.919 "r_mbytes_per_sec": 0, 00:16:44.919 "w_mbytes_per_sec": 0 00:16:44.919 }, 00:16:44.919 
"claimed": true, 00:16:44.919 "claim_type": "exclusive_write", 00:16:44.919 "zoned": false, 00:16:44.919 "supported_io_types": { 00:16:44.919 "read": true, 00:16:44.919 "write": true, 00:16:44.919 "unmap": true, 00:16:44.919 "flush": true, 00:16:44.919 "reset": true, 00:16:44.919 "nvme_admin": false, 00:16:44.919 "nvme_io": false, 00:16:44.919 "nvme_io_md": false, 00:16:44.919 "write_zeroes": true, 00:16:44.919 "zcopy": true, 00:16:44.919 "get_zone_info": false, 00:16:44.919 "zone_management": false, 00:16:44.919 "zone_append": false, 00:16:44.919 "compare": false, 00:16:44.919 "compare_and_write": false, 00:16:44.919 "abort": true, 00:16:44.919 "seek_hole": false, 00:16:44.919 "seek_data": false, 00:16:44.919 "copy": true, 00:16:44.919 "nvme_iov_md": false 00:16:44.919 }, 00:16:44.919 "memory_domains": [ 00:16:44.919 { 00:16:44.919 "dma_device_id": "system", 00:16:44.919 "dma_device_type": 1 00:16:44.919 }, 00:16:44.919 { 00:16:44.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.919 "dma_device_type": 2 00:16:44.919 } 00:16:44.919 ], 00:16:44.919 "driver_specific": {} 00:16:44.919 }' 00:16:44.919 13:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:45.178 13:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:45.178 13:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:45.178 13:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:45.178 13:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:45.178 13:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:45.178 13:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:45.178 13:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:45.178 13:16:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:45.178 13:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:45.437 13:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:45.437 13:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:45.437 13:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:45.437 13:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:45.437 13:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:45.696 13:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:45.696 "name": "BaseBdev2", 00:16:45.696 "aliases": [ 00:16:45.696 "4078a90e-080c-438a-877b-c9ff037de524" 00:16:45.696 ], 00:16:45.696 "product_name": "Malloc disk", 00:16:45.696 "block_size": 512, 00:16:45.696 "num_blocks": 65536, 00:16:45.696 "uuid": "4078a90e-080c-438a-877b-c9ff037de524", 00:16:45.696 "assigned_rate_limits": { 00:16:45.696 "rw_ios_per_sec": 0, 00:16:45.696 "rw_mbytes_per_sec": 0, 00:16:45.696 "r_mbytes_per_sec": 0, 00:16:45.696 "w_mbytes_per_sec": 0 00:16:45.696 }, 00:16:45.696 "claimed": true, 00:16:45.696 "claim_type": "exclusive_write", 00:16:45.696 "zoned": false, 00:16:45.696 "supported_io_types": { 00:16:45.696 "read": true, 00:16:45.696 "write": true, 00:16:45.696 "unmap": true, 00:16:45.696 "flush": true, 00:16:45.696 "reset": true, 00:16:45.696 "nvme_admin": false, 00:16:45.696 "nvme_io": false, 00:16:45.696 "nvme_io_md": false, 00:16:45.696 "write_zeroes": true, 00:16:45.696 "zcopy": true, 00:16:45.696 "get_zone_info": false, 00:16:45.696 "zone_management": false, 00:16:45.696 "zone_append": false, 00:16:45.696 "compare": false, 00:16:45.696 "compare_and_write": false, 
00:16:45.696 "abort": true, 00:16:45.696 "seek_hole": false, 00:16:45.696 "seek_data": false, 00:16:45.696 "copy": true, 00:16:45.696 "nvme_iov_md": false 00:16:45.696 }, 00:16:45.696 "memory_domains": [ 00:16:45.696 { 00:16:45.696 "dma_device_id": "system", 00:16:45.696 "dma_device_type": 1 00:16:45.696 }, 00:16:45.696 { 00:16:45.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.696 "dma_device_type": 2 00:16:45.696 } 00:16:45.696 ], 00:16:45.696 "driver_specific": {} 00:16:45.696 }' 00:16:45.696 13:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:45.696 13:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:45.697 13:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:45.697 13:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:45.697 13:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:45.697 13:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:45.697 13:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:45.697 13:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:45.956 13:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:45.956 13:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:45.956 13:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:45.956 13:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:45.956 13:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:45.956 13:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:45.956 13:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:46.215 13:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:46.215 "name": "BaseBdev3", 00:16:46.215 "aliases": [ 00:16:46.215 "b2f7afe9-9a61-495a-9d08-8141abf4b181" 00:16:46.215 ], 00:16:46.215 "product_name": "Malloc disk", 00:16:46.215 "block_size": 512, 00:16:46.215 "num_blocks": 65536, 00:16:46.215 "uuid": "b2f7afe9-9a61-495a-9d08-8141abf4b181", 00:16:46.215 "assigned_rate_limits": { 00:16:46.215 "rw_ios_per_sec": 0, 00:16:46.215 "rw_mbytes_per_sec": 0, 00:16:46.215 "r_mbytes_per_sec": 0, 00:16:46.215 "w_mbytes_per_sec": 0 00:16:46.215 }, 00:16:46.215 "claimed": true, 00:16:46.215 "claim_type": "exclusive_write", 00:16:46.215 "zoned": false, 00:16:46.215 "supported_io_types": { 00:16:46.215 "read": true, 00:16:46.215 "write": true, 00:16:46.215 "unmap": true, 00:16:46.215 "flush": true, 00:16:46.215 "reset": true, 00:16:46.215 "nvme_admin": false, 00:16:46.215 "nvme_io": false, 00:16:46.215 "nvme_io_md": false, 00:16:46.215 "write_zeroes": true, 00:16:46.215 "zcopy": true, 00:16:46.215 "get_zone_info": false, 00:16:46.215 "zone_management": false, 00:16:46.215 "zone_append": false, 00:16:46.215 "compare": false, 00:16:46.215 "compare_and_write": false, 00:16:46.215 "abort": true, 00:16:46.215 "seek_hole": false, 00:16:46.215 "seek_data": false, 00:16:46.215 "copy": true, 00:16:46.215 "nvme_iov_md": false 00:16:46.215 }, 00:16:46.215 "memory_domains": [ 00:16:46.215 { 00:16:46.215 "dma_device_id": "system", 00:16:46.215 "dma_device_type": 1 00:16:46.215 }, 00:16:46.215 { 00:16:46.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.215 "dma_device_type": 2 00:16:46.215 } 00:16:46.215 ], 00:16:46.215 "driver_specific": {} 00:16:46.215 }' 00:16:46.215 13:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:46.215 13:16:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:46.215 13:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:46.215 13:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:46.215 13:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:46.215 13:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:46.215 13:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:46.475 13:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:46.475 13:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:46.475 13:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:46.475 13:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:46.475 13:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:46.475 13:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:46.734 [2024-07-26 13:16:27.076769] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:46.734 [2024-07-26 13:16:27.076792] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:46.734 [2024-07-26 13:16:27.076845] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:46.734 [2024-07-26 13:16:27.077096] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:46.734 [2024-07-26 13:16:27.077108] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15fb0e0 name Existed_Raid, state offline 
00:16:46.734 13:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 710345 00:16:46.734 13:16:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 710345 ']' 00:16:46.734 13:16:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 710345 00:16:46.734 13:16:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:16:46.734 13:16:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:46.734 13:16:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 710345 00:16:46.734 13:16:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:46.734 13:16:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:46.734 13:16:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 710345' 00:16:46.734 killing process with pid 710345 00:16:46.734 13:16:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 710345 00:16:46.734 [2024-07-26 13:16:27.146893] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:46.734 13:16:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 710345 00:16:46.734 [2024-07-26 13:16:27.171585] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:46.994 13:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:46.994 00:16:46.994 real 0m26.870s 00:16:46.994 user 0m49.651s 00:16:46.994 sys 0m4.947s 00:16:46.994 13:16:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:46.994 13:16:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.994 ************************************ 00:16:46.995 END TEST 
raid_state_function_test 00:16:46.995 ************************************ 00:16:46.995 13:16:27 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:16:46.995 13:16:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:46.995 13:16:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:46.995 13:16:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:46.995 ************************************ 00:16:46.995 START TEST raid_state_function_test_sb 00:16:46.995 ************************************ 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=715436 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 715436' 00:16:46.995 Process raid pid: 715436 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L 
bdev_raid 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 715436 /var/tmp/spdk-raid.sock 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 715436 ']' 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:46.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:46.995 13:16:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.995 [2024-07-26 13:16:27.511311] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:16:46.995 [2024-07-26 13:16:27.511367] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3d:02.3 cannot be used 00:16:47.255 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:47.255 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:47.255 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:47.255 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:47.255 [2024-07-26 13:16:27.642981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.255 [2024-07-26 13:16:27.729401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.515 [2024-07-26 13:16:27.788340] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.515 [2024-07-26 13:16:27.788373] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:48.083 13:16:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:48.083 13:16:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:48.083 13:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:48.343 [2024-07-26 13:16:28.616172] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:48.343 [2024-07-26 13:16:28.616210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:16:48.343 [2024-07-26 13:16:28.616221] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:48.343 [2024-07-26 13:16:28.616232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:48.343 [2024-07-26 13:16:28.616240] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:48.343 [2024-07-26 13:16:28.616250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:48.343 13:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:48.343 13:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:48.343 13:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:48.343 13:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:48.343 13:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:48.343 13:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:48.343 13:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:48.343 13:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:48.343 13:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:48.343 13:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:48.343 13:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.343 13:16:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.343 13:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:48.343 "name": "Existed_Raid", 00:16:48.343 "uuid": "fd787090-eb92-4bd6-82d6-b8fbcb9048c5", 00:16:48.343 "strip_size_kb": 0, 00:16:48.343 "state": "configuring", 00:16:48.343 "raid_level": "raid1", 00:16:48.343 "superblock": true, 00:16:48.343 "num_base_bdevs": 3, 00:16:48.343 "num_base_bdevs_discovered": 0, 00:16:48.343 "num_base_bdevs_operational": 3, 00:16:48.343 "base_bdevs_list": [ 00:16:48.343 { 00:16:48.343 "name": "BaseBdev1", 00:16:48.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.343 "is_configured": false, 00:16:48.343 "data_offset": 0, 00:16:48.343 "data_size": 0 00:16:48.343 }, 00:16:48.343 { 00:16:48.343 "name": "BaseBdev2", 00:16:48.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.343 "is_configured": false, 00:16:48.343 "data_offset": 0, 00:16:48.343 "data_size": 0 00:16:48.343 }, 00:16:48.343 { 00:16:48.343 "name": "BaseBdev3", 00:16:48.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.343 "is_configured": false, 00:16:48.343 "data_offset": 0, 00:16:48.343 "data_size": 0 00:16:48.343 } 00:16:48.343 ] 00:16:48.343 }' 00:16:48.602 13:16:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:48.602 13:16:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.171 13:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:49.171 [2024-07-26 13:16:29.654774] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:49.171 [2024-07-26 13:16:29.654805] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ec9f40 name Existed_Raid, state configuring 00:16:49.171 13:16:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:49.430 [2024-07-26 13:16:29.879396] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:49.430 [2024-07-26 13:16:29.879423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:49.430 [2024-07-26 13:16:29.879433] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:49.430 [2024-07-26 13:16:29.879443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:49.430 [2024-07-26 13:16:29.879451] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:49.430 [2024-07-26 13:16:29.879461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:49.430 13:16:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:49.719 [2024-07-26 13:16:30.113581] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:49.719 BaseBdev1 00:16:49.719 13:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:49.719 13:16:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:49.719 13:16:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:49.719 13:16:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:49.719 13:16:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:49.719 13:16:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:49.719 13:16:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:49.983 13:16:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:50.243 [ 00:16:50.243 { 00:16:50.243 "name": "BaseBdev1", 00:16:50.243 "aliases": [ 00:16:50.243 "5bee6940-4550-4637-a19a-757d5fe0e12d" 00:16:50.243 ], 00:16:50.243 "product_name": "Malloc disk", 00:16:50.243 "block_size": 512, 00:16:50.243 "num_blocks": 65536, 00:16:50.243 "uuid": "5bee6940-4550-4637-a19a-757d5fe0e12d", 00:16:50.243 "assigned_rate_limits": { 00:16:50.243 "rw_ios_per_sec": 0, 00:16:50.243 "rw_mbytes_per_sec": 0, 00:16:50.243 "r_mbytes_per_sec": 0, 00:16:50.243 "w_mbytes_per_sec": 0 00:16:50.243 }, 00:16:50.243 "claimed": true, 00:16:50.243 "claim_type": "exclusive_write", 00:16:50.243 "zoned": false, 00:16:50.243 "supported_io_types": { 00:16:50.243 "read": true, 00:16:50.243 "write": true, 00:16:50.243 "unmap": true, 00:16:50.243 "flush": true, 00:16:50.243 "reset": true, 00:16:50.243 "nvme_admin": false, 00:16:50.243 "nvme_io": false, 00:16:50.243 "nvme_io_md": false, 00:16:50.243 "write_zeroes": true, 00:16:50.243 "zcopy": true, 00:16:50.243 "get_zone_info": false, 00:16:50.243 "zone_management": false, 00:16:50.243 "zone_append": false, 00:16:50.243 "compare": false, 00:16:50.243 "compare_and_write": false, 00:16:50.243 "abort": true, 00:16:50.243 "seek_hole": false, 00:16:50.243 "seek_data": false, 00:16:50.243 "copy": true, 00:16:50.243 "nvme_iov_md": false 00:16:50.243 }, 00:16:50.243 "memory_domains": [ 00:16:50.243 { 00:16:50.243 "dma_device_id": "system", 00:16:50.243 "dma_device_type": 1 00:16:50.243 }, 
00:16:50.243 { 00:16:50.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.243 "dma_device_type": 2 00:16:50.243 } 00:16:50.243 ], 00:16:50.243 "driver_specific": {} 00:16:50.243 } 00:16:50.243 ] 00:16:50.243 13:16:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:50.243 13:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:50.243 13:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:50.243 13:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:50.243 13:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:50.243 13:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:50.243 13:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:50.243 13:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:50.243 13:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:50.243 13:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:50.243 13:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:50.243 13:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.243 13:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.502 13:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:50.502 "name": "Existed_Raid", 00:16:50.502 "uuid": 
"1fb49f7c-5f3e-4070-b754-418cc9ba148c", 00:16:50.502 "strip_size_kb": 0, 00:16:50.502 "state": "configuring", 00:16:50.502 "raid_level": "raid1", 00:16:50.502 "superblock": true, 00:16:50.502 "num_base_bdevs": 3, 00:16:50.502 "num_base_bdevs_discovered": 1, 00:16:50.502 "num_base_bdevs_operational": 3, 00:16:50.502 "base_bdevs_list": [ 00:16:50.502 { 00:16:50.502 "name": "BaseBdev1", 00:16:50.502 "uuid": "5bee6940-4550-4637-a19a-757d5fe0e12d", 00:16:50.502 "is_configured": true, 00:16:50.502 "data_offset": 2048, 00:16:50.502 "data_size": 63488 00:16:50.502 }, 00:16:50.502 { 00:16:50.502 "name": "BaseBdev2", 00:16:50.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.502 "is_configured": false, 00:16:50.502 "data_offset": 0, 00:16:50.502 "data_size": 0 00:16:50.502 }, 00:16:50.502 { 00:16:50.502 "name": "BaseBdev3", 00:16:50.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.502 "is_configured": false, 00:16:50.502 "data_offset": 0, 00:16:50.502 "data_size": 0 00:16:50.502 } 00:16:50.502 ] 00:16:50.502 }' 00:16:50.502 13:16:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:50.502 13:16:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.070 13:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:51.070 [2024-07-26 13:16:31.581465] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:51.070 [2024-07-26 13:16:31.581502] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ec9810 name Existed_Raid, state configuring 00:16:51.329 13:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 
00:16:51.329 [2024-07-26 13:16:31.814113] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:51.329 [2024-07-26 13:16:31.815506] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:51.329 [2024-07-26 13:16:31.815538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:51.329 [2024-07-26 13:16:31.815552] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:51.329 [2024-07-26 13:16:31.815563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:51.329 13:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:51.329 13:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:51.329 13:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:51.329 13:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:51.329 13:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:51.329 13:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:51.329 13:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:51.329 13:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:51.329 13:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:51.329 13:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:51.329 13:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:51.329 13:16:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:51.329 13:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.329 13:16:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.589 13:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:51.589 "name": "Existed_Raid", 00:16:51.589 "uuid": "3658f87d-61ee-4448-b047-0688b17c126b", 00:16:51.589 "strip_size_kb": 0, 00:16:51.589 "state": "configuring", 00:16:51.589 "raid_level": "raid1", 00:16:51.589 "superblock": true, 00:16:51.589 "num_base_bdevs": 3, 00:16:51.589 "num_base_bdevs_discovered": 1, 00:16:51.589 "num_base_bdevs_operational": 3, 00:16:51.589 "base_bdevs_list": [ 00:16:51.589 { 00:16:51.589 "name": "BaseBdev1", 00:16:51.589 "uuid": "5bee6940-4550-4637-a19a-757d5fe0e12d", 00:16:51.589 "is_configured": true, 00:16:51.589 "data_offset": 2048, 00:16:51.589 "data_size": 63488 00:16:51.589 }, 00:16:51.589 { 00:16:51.589 "name": "BaseBdev2", 00:16:51.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.589 "is_configured": false, 00:16:51.589 "data_offset": 0, 00:16:51.589 "data_size": 0 00:16:51.589 }, 00:16:51.589 { 00:16:51.589 "name": "BaseBdev3", 00:16:51.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.589 "is_configured": false, 00:16:51.589 "data_offset": 0, 00:16:51.589 "data_size": 0 00:16:51.589 } 00:16:51.589 ] 00:16:51.589 }' 00:16:51.589 13:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:51.589 13:16:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.155 13:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:52.413 [2024-07-26 13:16:32.843875] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:52.414 BaseBdev2 00:16:52.414 13:16:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:52.414 13:16:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:52.414 13:16:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:52.414 13:16:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:52.414 13:16:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:52.414 13:16:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:52.414 13:16:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:52.672 13:16:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:52.931 [ 00:16:52.931 { 00:16:52.931 "name": "BaseBdev2", 00:16:52.931 "aliases": [ 00:16:52.931 "2bba106e-964a-45d5-9ee4-a331afbd4355" 00:16:52.931 ], 00:16:52.931 "product_name": "Malloc disk", 00:16:52.931 "block_size": 512, 00:16:52.931 "num_blocks": 65536, 00:16:52.931 "uuid": "2bba106e-964a-45d5-9ee4-a331afbd4355", 00:16:52.931 "assigned_rate_limits": { 00:16:52.931 "rw_ios_per_sec": 0, 00:16:52.931 "rw_mbytes_per_sec": 0, 00:16:52.931 "r_mbytes_per_sec": 0, 00:16:52.931 "w_mbytes_per_sec": 0 00:16:52.931 }, 00:16:52.931 "claimed": true, 00:16:52.931 "claim_type": "exclusive_write", 00:16:52.931 "zoned": false, 00:16:52.931 "supported_io_types": { 
00:16:52.931 "read": true, 00:16:52.931 "write": true, 00:16:52.931 "unmap": true, 00:16:52.931 "flush": true, 00:16:52.931 "reset": true, 00:16:52.931 "nvme_admin": false, 00:16:52.931 "nvme_io": false, 00:16:52.931 "nvme_io_md": false, 00:16:52.931 "write_zeroes": true, 00:16:52.931 "zcopy": true, 00:16:52.931 "get_zone_info": false, 00:16:52.931 "zone_management": false, 00:16:52.931 "zone_append": false, 00:16:52.931 "compare": false, 00:16:52.931 "compare_and_write": false, 00:16:52.931 "abort": true, 00:16:52.931 "seek_hole": false, 00:16:52.931 "seek_data": false, 00:16:52.931 "copy": true, 00:16:52.931 "nvme_iov_md": false 00:16:52.931 }, 00:16:52.931 "memory_domains": [ 00:16:52.931 { 00:16:52.931 "dma_device_id": "system", 00:16:52.931 "dma_device_type": 1 00:16:52.931 }, 00:16:52.931 { 00:16:52.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.931 "dma_device_type": 2 00:16:52.931 } 00:16:52.931 ], 00:16:52.931 "driver_specific": {} 00:16:52.931 } 00:16:52.931 ] 00:16:52.931 13:16:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:52.931 13:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:52.931 13:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:52.931 13:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:52.931 13:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:52.931 13:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:52.931 13:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:52.931 13:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:52.931 13:16:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:52.931 13:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:52.931 13:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:52.931 13:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:52.931 13:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:52.931 13:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.931 13:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.190 13:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:53.190 "name": "Existed_Raid", 00:16:53.190 "uuid": "3658f87d-61ee-4448-b047-0688b17c126b", 00:16:53.190 "strip_size_kb": 0, 00:16:53.190 "state": "configuring", 00:16:53.190 "raid_level": "raid1", 00:16:53.190 "superblock": true, 00:16:53.190 "num_base_bdevs": 3, 00:16:53.190 "num_base_bdevs_discovered": 2, 00:16:53.190 "num_base_bdevs_operational": 3, 00:16:53.190 "base_bdevs_list": [ 00:16:53.190 { 00:16:53.190 "name": "BaseBdev1", 00:16:53.190 "uuid": "5bee6940-4550-4637-a19a-757d5fe0e12d", 00:16:53.190 "is_configured": true, 00:16:53.190 "data_offset": 2048, 00:16:53.190 "data_size": 63488 00:16:53.190 }, 00:16:53.190 { 00:16:53.190 "name": "BaseBdev2", 00:16:53.190 "uuid": "2bba106e-964a-45d5-9ee4-a331afbd4355", 00:16:53.190 "is_configured": true, 00:16:53.190 "data_offset": 2048, 00:16:53.190 "data_size": 63488 00:16:53.190 }, 00:16:53.190 { 00:16:53.190 "name": "BaseBdev3", 00:16:53.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.190 "is_configured": false, 00:16:53.190 "data_offset": 0, 00:16:53.191 
"data_size": 0 00:16:53.191 } 00:16:53.191 ] 00:16:53.191 }' 00:16:53.191 13:16:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:53.191 13:16:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.758 13:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:54.016 [2024-07-26 13:16:34.310934] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:54.016 [2024-07-26 13:16:34.311070] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1eca710 00:16:54.016 [2024-07-26 13:16:34.311083] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:54.016 [2024-07-26 13:16:34.311252] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1eca3e0 00:16:54.016 [2024-07-26 13:16:34.311365] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1eca710 00:16:54.016 [2024-07-26 13:16:34.311375] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1eca710 00:16:54.016 [2024-07-26 13:16:34.311462] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.016 BaseBdev3 00:16:54.016 13:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:54.016 13:16:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:54.016 13:16:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:54.016 13:16:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:54.016 13:16:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:54.016 13:16:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:54.016 13:16:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:54.275 13:16:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:54.275 [ 00:16:54.275 { 00:16:54.275 "name": "BaseBdev3", 00:16:54.275 "aliases": [ 00:16:54.275 "582d62c6-af33-49f3-a18f-878bf669ef20" 00:16:54.275 ], 00:16:54.275 "product_name": "Malloc disk", 00:16:54.275 "block_size": 512, 00:16:54.275 "num_blocks": 65536, 00:16:54.275 "uuid": "582d62c6-af33-49f3-a18f-878bf669ef20", 00:16:54.275 "assigned_rate_limits": { 00:16:54.275 "rw_ios_per_sec": 0, 00:16:54.275 "rw_mbytes_per_sec": 0, 00:16:54.275 "r_mbytes_per_sec": 0, 00:16:54.275 "w_mbytes_per_sec": 0 00:16:54.275 }, 00:16:54.275 "claimed": true, 00:16:54.275 "claim_type": "exclusive_write", 00:16:54.275 "zoned": false, 00:16:54.275 "supported_io_types": { 00:16:54.275 "read": true, 00:16:54.275 "write": true, 00:16:54.275 "unmap": true, 00:16:54.275 "flush": true, 00:16:54.275 "reset": true, 00:16:54.275 "nvme_admin": false, 00:16:54.275 "nvme_io": false, 00:16:54.275 "nvme_io_md": false, 00:16:54.275 "write_zeroes": true, 00:16:54.275 "zcopy": true, 00:16:54.275 "get_zone_info": false, 00:16:54.275 "zone_management": false, 00:16:54.275 "zone_append": false, 00:16:54.275 "compare": false, 00:16:54.275 "compare_and_write": false, 00:16:54.275 "abort": true, 00:16:54.275 "seek_hole": false, 00:16:54.275 "seek_data": false, 00:16:54.275 "copy": true, 00:16:54.275 "nvme_iov_md": false 00:16:54.275 }, 00:16:54.275 "memory_domains": [ 00:16:54.275 { 00:16:54.275 "dma_device_id": "system", 00:16:54.275 "dma_device_type": 1 00:16:54.275 }, 
00:16:54.275 { 00:16:54.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.275 "dma_device_type": 2 00:16:54.275 } 00:16:54.275 ], 00:16:54.275 "driver_specific": {} 00:16:54.275 } 00:16:54.275 ] 00:16:54.275 13:16:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:54.275 13:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:54.275 13:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:54.275 13:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:54.275 13:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:54.275 13:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:54.275 13:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:54.275 13:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:54.275 13:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:54.275 13:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:54.275 13:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:54.275 13:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:54.275 13:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:54.275 13:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.275 13:16:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:16:54.534 13:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:54.534 "name": "Existed_Raid", 00:16:54.534 "uuid": "3658f87d-61ee-4448-b047-0688b17c126b", 00:16:54.534 "strip_size_kb": 0, 00:16:54.534 "state": "online", 00:16:54.534 "raid_level": "raid1", 00:16:54.534 "superblock": true, 00:16:54.534 "num_base_bdevs": 3, 00:16:54.534 "num_base_bdevs_discovered": 3, 00:16:54.534 "num_base_bdevs_operational": 3, 00:16:54.534 "base_bdevs_list": [ 00:16:54.534 { 00:16:54.534 "name": "BaseBdev1", 00:16:54.534 "uuid": "5bee6940-4550-4637-a19a-757d5fe0e12d", 00:16:54.534 "is_configured": true, 00:16:54.534 "data_offset": 2048, 00:16:54.534 "data_size": 63488 00:16:54.534 }, 00:16:54.534 { 00:16:54.534 "name": "BaseBdev2", 00:16:54.534 "uuid": "2bba106e-964a-45d5-9ee4-a331afbd4355", 00:16:54.534 "is_configured": true, 00:16:54.534 "data_offset": 2048, 00:16:54.534 "data_size": 63488 00:16:54.534 }, 00:16:54.534 { 00:16:54.534 "name": "BaseBdev3", 00:16:54.534 "uuid": "582d62c6-af33-49f3-a18f-878bf669ef20", 00:16:54.534 "is_configured": true, 00:16:54.534 "data_offset": 2048, 00:16:54.534 "data_size": 63488 00:16:54.534 } 00:16:54.534 ] 00:16:54.534 }' 00:16:54.534 13:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:54.534 13:16:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.101 13:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:55.101 13:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:55.101 13:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:55.101 13:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:55.101 13:16:35 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:55.101 13:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:55.101 13:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:55.101 13:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:55.360 [2024-07-26 13:16:35.791117] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:55.360 13:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:55.360 "name": "Existed_Raid", 00:16:55.360 "aliases": [ 00:16:55.360 "3658f87d-61ee-4448-b047-0688b17c126b" 00:16:55.360 ], 00:16:55.360 "product_name": "Raid Volume", 00:16:55.360 "block_size": 512, 00:16:55.360 "num_blocks": 63488, 00:16:55.360 "uuid": "3658f87d-61ee-4448-b047-0688b17c126b", 00:16:55.360 "assigned_rate_limits": { 00:16:55.360 "rw_ios_per_sec": 0, 00:16:55.360 "rw_mbytes_per_sec": 0, 00:16:55.360 "r_mbytes_per_sec": 0, 00:16:55.360 "w_mbytes_per_sec": 0 00:16:55.360 }, 00:16:55.360 "claimed": false, 00:16:55.360 "zoned": false, 00:16:55.360 "supported_io_types": { 00:16:55.360 "read": true, 00:16:55.360 "write": true, 00:16:55.360 "unmap": false, 00:16:55.360 "flush": false, 00:16:55.360 "reset": true, 00:16:55.360 "nvme_admin": false, 00:16:55.360 "nvme_io": false, 00:16:55.360 "nvme_io_md": false, 00:16:55.360 "write_zeroes": true, 00:16:55.360 "zcopy": false, 00:16:55.360 "get_zone_info": false, 00:16:55.360 "zone_management": false, 00:16:55.360 "zone_append": false, 00:16:55.360 "compare": false, 00:16:55.360 "compare_and_write": false, 00:16:55.360 "abort": false, 00:16:55.360 "seek_hole": false, 00:16:55.360 "seek_data": false, 00:16:55.360 "copy": false, 00:16:55.360 "nvme_iov_md": false 00:16:55.360 }, 00:16:55.360 "memory_domains": [ 00:16:55.360 { 
00:16:55.360 "dma_device_id": "system", 00:16:55.360 "dma_device_type": 1 00:16:55.360 }, 00:16:55.360 { 00:16:55.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.360 "dma_device_type": 2 00:16:55.360 }, 00:16:55.360 { 00:16:55.360 "dma_device_id": "system", 00:16:55.360 "dma_device_type": 1 00:16:55.360 }, 00:16:55.360 { 00:16:55.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.360 "dma_device_type": 2 00:16:55.360 }, 00:16:55.360 { 00:16:55.360 "dma_device_id": "system", 00:16:55.360 "dma_device_type": 1 00:16:55.360 }, 00:16:55.360 { 00:16:55.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.360 "dma_device_type": 2 00:16:55.360 } 00:16:55.360 ], 00:16:55.360 "driver_specific": { 00:16:55.360 "raid": { 00:16:55.360 "uuid": "3658f87d-61ee-4448-b047-0688b17c126b", 00:16:55.360 "strip_size_kb": 0, 00:16:55.360 "state": "online", 00:16:55.360 "raid_level": "raid1", 00:16:55.360 "superblock": true, 00:16:55.360 "num_base_bdevs": 3, 00:16:55.360 "num_base_bdevs_discovered": 3, 00:16:55.360 "num_base_bdevs_operational": 3, 00:16:55.360 "base_bdevs_list": [ 00:16:55.360 { 00:16:55.360 "name": "BaseBdev1", 00:16:55.360 "uuid": "5bee6940-4550-4637-a19a-757d5fe0e12d", 00:16:55.360 "is_configured": true, 00:16:55.360 "data_offset": 2048, 00:16:55.360 "data_size": 63488 00:16:55.360 }, 00:16:55.360 { 00:16:55.360 "name": "BaseBdev2", 00:16:55.360 "uuid": "2bba106e-964a-45d5-9ee4-a331afbd4355", 00:16:55.360 "is_configured": true, 00:16:55.360 "data_offset": 2048, 00:16:55.360 "data_size": 63488 00:16:55.360 }, 00:16:55.360 { 00:16:55.360 "name": "BaseBdev3", 00:16:55.360 "uuid": "582d62c6-af33-49f3-a18f-878bf669ef20", 00:16:55.360 "is_configured": true, 00:16:55.360 "data_offset": 2048, 00:16:55.360 "data_size": 63488 00:16:55.360 } 00:16:55.360 ] 00:16:55.360 } 00:16:55.360 } 00:16:55.360 }' 00:16:55.360 13:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == 
true).name' 00:16:55.360 13:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:55.360 BaseBdev2 00:16:55.360 BaseBdev3' 00:16:55.360 13:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:55.360 13:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:55.360 13:16:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:55.620 13:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:55.620 "name": "BaseBdev1", 00:16:55.620 "aliases": [ 00:16:55.620 "5bee6940-4550-4637-a19a-757d5fe0e12d" 00:16:55.620 ], 00:16:55.620 "product_name": "Malloc disk", 00:16:55.620 "block_size": 512, 00:16:55.620 "num_blocks": 65536, 00:16:55.620 "uuid": "5bee6940-4550-4637-a19a-757d5fe0e12d", 00:16:55.620 "assigned_rate_limits": { 00:16:55.620 "rw_ios_per_sec": 0, 00:16:55.620 "rw_mbytes_per_sec": 0, 00:16:55.620 "r_mbytes_per_sec": 0, 00:16:55.620 "w_mbytes_per_sec": 0 00:16:55.620 }, 00:16:55.620 "claimed": true, 00:16:55.620 "claim_type": "exclusive_write", 00:16:55.620 "zoned": false, 00:16:55.620 "supported_io_types": { 00:16:55.620 "read": true, 00:16:55.620 "write": true, 00:16:55.620 "unmap": true, 00:16:55.620 "flush": true, 00:16:55.620 "reset": true, 00:16:55.620 "nvme_admin": false, 00:16:55.620 "nvme_io": false, 00:16:55.620 "nvme_io_md": false, 00:16:55.620 "write_zeroes": true, 00:16:55.620 "zcopy": true, 00:16:55.620 "get_zone_info": false, 00:16:55.620 "zone_management": false, 00:16:55.620 "zone_append": false, 00:16:55.620 "compare": false, 00:16:55.620 "compare_and_write": false, 00:16:55.620 "abort": true, 00:16:55.620 "seek_hole": false, 00:16:55.620 "seek_data": false, 00:16:55.620 "copy": true, 00:16:55.620 "nvme_iov_md": false 00:16:55.620 
}, 00:16:55.620 "memory_domains": [ 00:16:55.620 { 00:16:55.620 "dma_device_id": "system", 00:16:55.620 "dma_device_type": 1 00:16:55.620 }, 00:16:55.620 { 00:16:55.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.620 "dma_device_type": 2 00:16:55.620 } 00:16:55.620 ], 00:16:55.620 "driver_specific": {} 00:16:55.620 }' 00:16:55.620 13:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:55.620 13:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:55.899 13:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:55.900 13:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:55.900 13:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:55.900 13:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:55.900 13:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:55.900 13:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:55.900 13:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:55.900 13:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:55.900 13:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:56.165 13:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:56.165 13:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:56.165 13:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:56.165 13:16:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:56.165 13:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:56.165 "name": "BaseBdev2", 00:16:56.165 "aliases": [ 00:16:56.165 "2bba106e-964a-45d5-9ee4-a331afbd4355" 00:16:56.165 ], 00:16:56.165 "product_name": "Malloc disk", 00:16:56.165 "block_size": 512, 00:16:56.165 "num_blocks": 65536, 00:16:56.165 "uuid": "2bba106e-964a-45d5-9ee4-a331afbd4355", 00:16:56.165 "assigned_rate_limits": { 00:16:56.165 "rw_ios_per_sec": 0, 00:16:56.165 "rw_mbytes_per_sec": 0, 00:16:56.165 "r_mbytes_per_sec": 0, 00:16:56.165 "w_mbytes_per_sec": 0 00:16:56.165 }, 00:16:56.165 "claimed": true, 00:16:56.165 "claim_type": "exclusive_write", 00:16:56.165 "zoned": false, 00:16:56.165 "supported_io_types": { 00:16:56.165 "read": true, 00:16:56.165 "write": true, 00:16:56.165 "unmap": true, 00:16:56.165 "flush": true, 00:16:56.165 "reset": true, 00:16:56.165 "nvme_admin": false, 00:16:56.165 "nvme_io": false, 00:16:56.165 "nvme_io_md": false, 00:16:56.165 "write_zeroes": true, 00:16:56.165 "zcopy": true, 00:16:56.165 "get_zone_info": false, 00:16:56.165 "zone_management": false, 00:16:56.165 "zone_append": false, 00:16:56.165 "compare": false, 00:16:56.165 "compare_and_write": false, 00:16:56.165 "abort": true, 00:16:56.165 "seek_hole": false, 00:16:56.165 "seek_data": false, 00:16:56.165 "copy": true, 00:16:56.165 "nvme_iov_md": false 00:16:56.165 }, 00:16:56.165 "memory_domains": [ 00:16:56.165 { 00:16:56.165 "dma_device_id": "system", 00:16:56.165 "dma_device_type": 1 00:16:56.165 }, 00:16:56.165 { 00:16:56.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.165 "dma_device_type": 2 00:16:56.165 } 00:16:56.165 ], 00:16:56.165 "driver_specific": {} 00:16:56.165 }' 00:16:56.165 13:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:56.424 13:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:56.424 13:16:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:56.424 13:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:56.424 13:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:56.424 13:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:56.424 13:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:56.424 13:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:56.424 13:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:56.424 13:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:56.683 13:16:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:56.683 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:56.683 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:56.683 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:56.683 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:56.975 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:56.975 "name": "BaseBdev3", 00:16:56.975 "aliases": [ 00:16:56.975 "582d62c6-af33-49f3-a18f-878bf669ef20" 00:16:56.975 ], 00:16:56.975 "product_name": "Malloc disk", 00:16:56.975 "block_size": 512, 00:16:56.975 "num_blocks": 65536, 00:16:56.975 "uuid": "582d62c6-af33-49f3-a18f-878bf669ef20", 00:16:56.975 "assigned_rate_limits": { 00:16:56.975 "rw_ios_per_sec": 0, 00:16:56.975 "rw_mbytes_per_sec": 0, 00:16:56.975 
"r_mbytes_per_sec": 0, 00:16:56.975 "w_mbytes_per_sec": 0 00:16:56.975 }, 00:16:56.975 "claimed": true, 00:16:56.975 "claim_type": "exclusive_write", 00:16:56.975 "zoned": false, 00:16:56.975 "supported_io_types": { 00:16:56.975 "read": true, 00:16:56.975 "write": true, 00:16:56.975 "unmap": true, 00:16:56.975 "flush": true, 00:16:56.975 "reset": true, 00:16:56.975 "nvme_admin": false, 00:16:56.976 "nvme_io": false, 00:16:56.976 "nvme_io_md": false, 00:16:56.976 "write_zeroes": true, 00:16:56.976 "zcopy": true, 00:16:56.976 "get_zone_info": false, 00:16:56.976 "zone_management": false, 00:16:56.976 "zone_append": false, 00:16:56.976 "compare": false, 00:16:56.976 "compare_and_write": false, 00:16:56.976 "abort": true, 00:16:56.976 "seek_hole": false, 00:16:56.976 "seek_data": false, 00:16:56.976 "copy": true, 00:16:56.976 "nvme_iov_md": false 00:16:56.976 }, 00:16:56.976 "memory_domains": [ 00:16:56.976 { 00:16:56.976 "dma_device_id": "system", 00:16:56.976 "dma_device_type": 1 00:16:56.976 }, 00:16:56.976 { 00:16:56.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.976 "dma_device_type": 2 00:16:56.976 } 00:16:56.976 ], 00:16:56.976 "driver_specific": {} 00:16:56.976 }' 00:16:56.976 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:56.976 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:56.976 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:56.976 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:56.976 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:56.976 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:56.976 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:56.976 13:16:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:56.976 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:56.976 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:57.235 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:57.235 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:57.235 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:57.495 [2024-07-26 13:16:37.768096] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:57.495 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:57.495 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:57.495 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:57.495 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:16:57.495 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:57.495 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:57.495 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:57.495 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:57.495 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:57.495 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:57.495 13:16:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:57.495 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:57.495 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:57.495 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:57.495 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:57.495 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.495 13:16:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.495 13:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:57.495 "name": "Existed_Raid", 00:16:57.495 "uuid": "3658f87d-61ee-4448-b047-0688b17c126b", 00:16:57.495 "strip_size_kb": 0, 00:16:57.495 "state": "online", 00:16:57.495 "raid_level": "raid1", 00:16:57.495 "superblock": true, 00:16:57.495 "num_base_bdevs": 3, 00:16:57.495 "num_base_bdevs_discovered": 2, 00:16:57.495 "num_base_bdevs_operational": 2, 00:16:57.495 "base_bdevs_list": [ 00:16:57.495 { 00:16:57.495 "name": null, 00:16:57.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.495 "is_configured": false, 00:16:57.495 "data_offset": 2048, 00:16:57.495 "data_size": 63488 00:16:57.495 }, 00:16:57.495 { 00:16:57.495 "name": "BaseBdev2", 00:16:57.495 "uuid": "2bba106e-964a-45d5-9ee4-a331afbd4355", 00:16:57.495 "is_configured": true, 00:16:57.495 "data_offset": 2048, 00:16:57.495 "data_size": 63488 00:16:57.495 }, 00:16:57.495 { 00:16:57.495 "name": "BaseBdev3", 00:16:57.495 "uuid": "582d62c6-af33-49f3-a18f-878bf669ef20", 00:16:57.495 "is_configured": true, 00:16:57.495 "data_offset": 2048, 00:16:57.495 
"data_size": 63488 00:16:57.495 } 00:16:57.495 ] 00:16:57.495 }' 00:16:57.495 13:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:57.495 13:16:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.434 13:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:58.434 13:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:58.434 13:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:58.434 13:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.434 13:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:58.434 13:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:58.434 13:16:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:58.693 [2024-07-26 13:16:39.048523] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:58.694 13:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:58.694 13:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:58.694 13:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.694 13:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:58.953 13:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
raid_bdev=Existed_Raid 00:16:58.953 13:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:58.953 13:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:59.213 [2024-07-26 13:16:39.511658] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:59.213 [2024-07-26 13:16:39.511732] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:59.213 [2024-07-26 13:16:39.521918] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.213 [2024-07-26 13:16:39.521946] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:59.213 [2024-07-26 13:16:39.521956] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1eca710 name Existed_Raid, state offline 00:16:59.213 13:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:59.213 13:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:59.213 13:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.213 13:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:59.473 13:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:59.473 13:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:59.473 13:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:16:59.473 13:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:59.473 
13:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:59.473 13:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:59.473 BaseBdev2 00:16:59.473 13:16:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:59.473 13:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:59.473 13:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:59.473 13:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:59.473 13:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:59.473 13:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:59.473 13:16:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:59.732 13:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:59.991 [ 00:16:59.991 { 00:16:59.991 "name": "BaseBdev2", 00:16:59.991 "aliases": [ 00:16:59.991 "9a30de59-79bd-4613-bead-21c484485912" 00:16:59.991 ], 00:16:59.991 "product_name": "Malloc disk", 00:16:59.991 "block_size": 512, 00:16:59.991 "num_blocks": 65536, 00:16:59.991 "uuid": "9a30de59-79bd-4613-bead-21c484485912", 00:16:59.991 "assigned_rate_limits": { 00:16:59.991 "rw_ios_per_sec": 0, 00:16:59.991 "rw_mbytes_per_sec": 0, 00:16:59.991 "r_mbytes_per_sec": 0, 00:16:59.991 "w_mbytes_per_sec": 0 00:16:59.991 }, 
00:16:59.991 "claimed": false, 00:16:59.991 "zoned": false, 00:16:59.991 "supported_io_types": { 00:16:59.991 "read": true, 00:16:59.991 "write": true, 00:16:59.991 "unmap": true, 00:16:59.991 "flush": true, 00:16:59.991 "reset": true, 00:16:59.991 "nvme_admin": false, 00:16:59.991 "nvme_io": false, 00:16:59.991 "nvme_io_md": false, 00:16:59.991 "write_zeroes": true, 00:16:59.991 "zcopy": true, 00:16:59.991 "get_zone_info": false, 00:16:59.991 "zone_management": false, 00:16:59.991 "zone_append": false, 00:16:59.991 "compare": false, 00:16:59.991 "compare_and_write": false, 00:16:59.991 "abort": true, 00:16:59.991 "seek_hole": false, 00:16:59.991 "seek_data": false, 00:16:59.991 "copy": true, 00:16:59.991 "nvme_iov_md": false 00:16:59.991 }, 00:16:59.991 "memory_domains": [ 00:16:59.991 { 00:16:59.991 "dma_device_id": "system", 00:16:59.991 "dma_device_type": 1 00:16:59.991 }, 00:16:59.991 { 00:16:59.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.991 "dma_device_type": 2 00:16:59.991 } 00:16:59.991 ], 00:16:59.991 "driver_specific": {} 00:16:59.991 } 00:16:59.991 ] 00:16:59.991 13:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:59.991 13:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:59.991 13:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:59.991 13:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:00.251 BaseBdev3 00:17:00.251 13:16:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:00.251 13:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:00.251 13:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:17:00.251 13:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:00.251 13:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:00.251 13:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:00.251 13:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:00.510 13:16:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:00.769 [ 00:17:00.769 { 00:17:00.769 "name": "BaseBdev3", 00:17:00.769 "aliases": [ 00:17:00.769 "53f1c1f2-3094-4c6d-b48c-5b746cf66d57" 00:17:00.769 ], 00:17:00.769 "product_name": "Malloc disk", 00:17:00.769 "block_size": 512, 00:17:00.769 "num_blocks": 65536, 00:17:00.769 "uuid": "53f1c1f2-3094-4c6d-b48c-5b746cf66d57", 00:17:00.769 "assigned_rate_limits": { 00:17:00.769 "rw_ios_per_sec": 0, 00:17:00.769 "rw_mbytes_per_sec": 0, 00:17:00.769 "r_mbytes_per_sec": 0, 00:17:00.769 "w_mbytes_per_sec": 0 00:17:00.769 }, 00:17:00.769 "claimed": false, 00:17:00.769 "zoned": false, 00:17:00.769 "supported_io_types": { 00:17:00.769 "read": true, 00:17:00.769 "write": true, 00:17:00.769 "unmap": true, 00:17:00.769 "flush": true, 00:17:00.769 "reset": true, 00:17:00.769 "nvme_admin": false, 00:17:00.769 "nvme_io": false, 00:17:00.769 "nvme_io_md": false, 00:17:00.769 "write_zeroes": true, 00:17:00.769 "zcopy": true, 00:17:00.769 "get_zone_info": false, 00:17:00.769 "zone_management": false, 00:17:00.769 "zone_append": false, 00:17:00.769 "compare": false, 00:17:00.769 "compare_and_write": false, 00:17:00.769 "abort": true, 00:17:00.769 "seek_hole": false, 00:17:00.769 "seek_data": false, 00:17:00.769 
"copy": true, 00:17:00.769 "nvme_iov_md": false 00:17:00.769 }, 00:17:00.769 "memory_domains": [ 00:17:00.769 { 00:17:00.769 "dma_device_id": "system", 00:17:00.769 "dma_device_type": 1 00:17:00.769 }, 00:17:00.769 { 00:17:00.769 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.769 "dma_device_type": 2 00:17:00.769 } 00:17:00.769 ], 00:17:00.769 "driver_specific": {} 00:17:00.769 } 00:17:00.769 ] 00:17:00.769 13:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:00.769 13:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:00.769 13:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:00.769 13:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:00.769 [2024-07-26 13:16:41.282789] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:00.769 [2024-07-26 13:16:41.282825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:00.769 [2024-07-26 13:16:41.282841] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:00.769 [2024-07-26 13:16:41.284055] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:01.029 13:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:01.029 13:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:01.029 13:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:01.029 13:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:17:01.029 13:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:01.029 13:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:01.029 13:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:01.029 13:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:01.029 13:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:01.029 13:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:01.029 13:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.029 13:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.029 13:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:01.029 "name": "Existed_Raid", 00:17:01.029 "uuid": "87277fb9-c90a-470c-9b36-71ac2d730930", 00:17:01.029 "strip_size_kb": 0, 00:17:01.029 "state": "configuring", 00:17:01.029 "raid_level": "raid1", 00:17:01.029 "superblock": true, 00:17:01.029 "num_base_bdevs": 3, 00:17:01.029 "num_base_bdevs_discovered": 2, 00:17:01.029 "num_base_bdevs_operational": 3, 00:17:01.029 "base_bdevs_list": [ 00:17:01.029 { 00:17:01.029 "name": "BaseBdev1", 00:17:01.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.029 "is_configured": false, 00:17:01.029 "data_offset": 0, 00:17:01.029 "data_size": 0 00:17:01.029 }, 00:17:01.029 { 00:17:01.029 "name": "BaseBdev2", 00:17:01.029 "uuid": "9a30de59-79bd-4613-bead-21c484485912", 00:17:01.029 "is_configured": true, 00:17:01.029 "data_offset": 2048, 00:17:01.029 "data_size": 63488 00:17:01.029 }, 
00:17:01.029 { 00:17:01.029 "name": "BaseBdev3", 00:17:01.029 "uuid": "53f1c1f2-3094-4c6d-b48c-5b746cf66d57", 00:17:01.029 "is_configured": true, 00:17:01.029 "data_offset": 2048, 00:17:01.029 "data_size": 63488 00:17:01.029 } 00:17:01.029 ] 00:17:01.029 }' 00:17:01.029 13:16:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:01.029 13:16:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.598 13:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:01.857 [2024-07-26 13:16:42.309465] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:01.857 13:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:01.857 13:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:01.857 13:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:01.857 13:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:01.857 13:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:01.857 13:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:01.857 13:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:01.857 13:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:01.857 13:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:01.857 13:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:01.857 13:16:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.857 13:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.116 13:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:02.116 "name": "Existed_Raid", 00:17:02.116 "uuid": "87277fb9-c90a-470c-9b36-71ac2d730930", 00:17:02.116 "strip_size_kb": 0, 00:17:02.116 "state": "configuring", 00:17:02.116 "raid_level": "raid1", 00:17:02.116 "superblock": true, 00:17:02.116 "num_base_bdevs": 3, 00:17:02.116 "num_base_bdevs_discovered": 1, 00:17:02.116 "num_base_bdevs_operational": 3, 00:17:02.116 "base_bdevs_list": [ 00:17:02.116 { 00:17:02.116 "name": "BaseBdev1", 00:17:02.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.116 "is_configured": false, 00:17:02.116 "data_offset": 0, 00:17:02.116 "data_size": 0 00:17:02.116 }, 00:17:02.116 { 00:17:02.116 "name": null, 00:17:02.116 "uuid": "9a30de59-79bd-4613-bead-21c484485912", 00:17:02.116 "is_configured": false, 00:17:02.116 "data_offset": 2048, 00:17:02.116 "data_size": 63488 00:17:02.116 }, 00:17:02.116 { 00:17:02.116 "name": "BaseBdev3", 00:17:02.116 "uuid": "53f1c1f2-3094-4c6d-b48c-5b746cf66d57", 00:17:02.116 "is_configured": true, 00:17:02.116 "data_offset": 2048, 00:17:02.117 "data_size": 63488 00:17:02.117 } 00:17:02.117 ] 00:17:02.117 }' 00:17:02.117 13:16:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:02.117 13:16:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.684 13:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.684 13:16:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:02.956 13:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:02.956 13:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:03.218 [2024-07-26 13:16:43.503678] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:03.218 BaseBdev1 00:17:03.218 13:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:03.218 13:16:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:03.218 13:16:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:03.218 13:16:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:03.218 13:16:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:03.218 13:16:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:03.218 13:16:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:03.218 13:16:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:03.477 [ 00:17:03.477 { 00:17:03.477 "name": "BaseBdev1", 00:17:03.477 "aliases": [ 00:17:03.477 "cfd547ae-5586-42ce-b51f-195971b7e1a1" 00:17:03.477 ], 00:17:03.477 "product_name": "Malloc disk", 00:17:03.477 "block_size": 512, 00:17:03.477 "num_blocks": 65536, 00:17:03.477 "uuid": 
"cfd547ae-5586-42ce-b51f-195971b7e1a1", 00:17:03.477 "assigned_rate_limits": { 00:17:03.477 "rw_ios_per_sec": 0, 00:17:03.477 "rw_mbytes_per_sec": 0, 00:17:03.477 "r_mbytes_per_sec": 0, 00:17:03.477 "w_mbytes_per_sec": 0 00:17:03.477 }, 00:17:03.477 "claimed": true, 00:17:03.477 "claim_type": "exclusive_write", 00:17:03.477 "zoned": false, 00:17:03.477 "supported_io_types": { 00:17:03.477 "read": true, 00:17:03.477 "write": true, 00:17:03.477 "unmap": true, 00:17:03.477 "flush": true, 00:17:03.477 "reset": true, 00:17:03.477 "nvme_admin": false, 00:17:03.477 "nvme_io": false, 00:17:03.477 "nvme_io_md": false, 00:17:03.477 "write_zeroes": true, 00:17:03.477 "zcopy": true, 00:17:03.477 "get_zone_info": false, 00:17:03.477 "zone_management": false, 00:17:03.477 "zone_append": false, 00:17:03.477 "compare": false, 00:17:03.477 "compare_and_write": false, 00:17:03.477 "abort": true, 00:17:03.477 "seek_hole": false, 00:17:03.477 "seek_data": false, 00:17:03.477 "copy": true, 00:17:03.477 "nvme_iov_md": false 00:17:03.477 }, 00:17:03.477 "memory_domains": [ 00:17:03.477 { 00:17:03.477 "dma_device_id": "system", 00:17:03.477 "dma_device_type": 1 00:17:03.477 }, 00:17:03.477 { 00:17:03.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.477 "dma_device_type": 2 00:17:03.477 } 00:17:03.477 ], 00:17:03.477 "driver_specific": {} 00:17:03.477 } 00:17:03.477 ] 00:17:03.477 13:16:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:03.477 13:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:03.477 13:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:03.477 13:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:03.477 13:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:03.477 
13:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:03.477 13:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:03.477 13:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:03.477 13:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:03.477 13:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:03.477 13:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:03.478 13:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.478 13:16:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.736 13:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:03.736 "name": "Existed_Raid", 00:17:03.736 "uuid": "87277fb9-c90a-470c-9b36-71ac2d730930", 00:17:03.736 "strip_size_kb": 0, 00:17:03.736 "state": "configuring", 00:17:03.736 "raid_level": "raid1", 00:17:03.736 "superblock": true, 00:17:03.736 "num_base_bdevs": 3, 00:17:03.736 "num_base_bdevs_discovered": 2, 00:17:03.736 "num_base_bdevs_operational": 3, 00:17:03.736 "base_bdevs_list": [ 00:17:03.736 { 00:17:03.736 "name": "BaseBdev1", 00:17:03.736 "uuid": "cfd547ae-5586-42ce-b51f-195971b7e1a1", 00:17:03.736 "is_configured": true, 00:17:03.736 "data_offset": 2048, 00:17:03.736 "data_size": 63488 00:17:03.736 }, 00:17:03.736 { 00:17:03.736 "name": null, 00:17:03.736 "uuid": "9a30de59-79bd-4613-bead-21c484485912", 00:17:03.736 "is_configured": false, 00:17:03.736 "data_offset": 2048, 00:17:03.736 "data_size": 63488 00:17:03.736 }, 00:17:03.736 { 00:17:03.736 "name": 
"BaseBdev3", 00:17:03.736 "uuid": "53f1c1f2-3094-4c6d-b48c-5b746cf66d57", 00:17:03.736 "is_configured": true, 00:17:03.736 "data_offset": 2048, 00:17:03.736 "data_size": 63488 00:17:03.736 } 00:17:03.736 ] 00:17:03.736 }' 00:17:03.737 13:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:03.737 13:16:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.339 13:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.339 13:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:04.598 13:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:04.598 13:16:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:04.858 [2024-07-26 13:16:45.152025] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:04.858 13:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:04.858 13:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:04.858 13:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:04.858 13:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:04.858 13:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:04.858 13:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:04.858 13:16:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:04.858 13:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:04.858 13:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:04.858 13:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:04.858 13:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.858 13:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.117 13:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:05.117 "name": "Existed_Raid", 00:17:05.117 "uuid": "87277fb9-c90a-470c-9b36-71ac2d730930", 00:17:05.117 "strip_size_kb": 0, 00:17:05.117 "state": "configuring", 00:17:05.117 "raid_level": "raid1", 00:17:05.117 "superblock": true, 00:17:05.117 "num_base_bdevs": 3, 00:17:05.117 "num_base_bdevs_discovered": 1, 00:17:05.117 "num_base_bdevs_operational": 3, 00:17:05.117 "base_bdevs_list": [ 00:17:05.117 { 00:17:05.117 "name": "BaseBdev1", 00:17:05.117 "uuid": "cfd547ae-5586-42ce-b51f-195971b7e1a1", 00:17:05.117 "is_configured": true, 00:17:05.117 "data_offset": 2048, 00:17:05.117 "data_size": 63488 00:17:05.117 }, 00:17:05.117 { 00:17:05.117 "name": null, 00:17:05.117 "uuid": "9a30de59-79bd-4613-bead-21c484485912", 00:17:05.117 "is_configured": false, 00:17:05.117 "data_offset": 2048, 00:17:05.117 "data_size": 63488 00:17:05.117 }, 00:17:05.117 { 00:17:05.117 "name": null, 00:17:05.117 "uuid": "53f1c1f2-3094-4c6d-b48c-5b746cf66d57", 00:17:05.117 "is_configured": false, 00:17:05.117 "data_offset": 2048, 00:17:05.117 "data_size": 63488 00:17:05.117 } 00:17:05.117 ] 00:17:05.117 }' 00:17:05.117 13:16:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:05.117 13:16:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.685 13:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.685 13:16:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:05.685 13:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:05.685 13:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:05.944 [2024-07-26 13:16:46.411368] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:05.944 13:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:05.944 13:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:05.944 13:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:05.944 13:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:05.944 13:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:05.944 13:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:05.944 13:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:05.944 13:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:05.944 13:16:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:05.944 13:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:05.944 13:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.944 13:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.204 13:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:06.204 "name": "Existed_Raid", 00:17:06.204 "uuid": "87277fb9-c90a-470c-9b36-71ac2d730930", 00:17:06.204 "strip_size_kb": 0, 00:17:06.204 "state": "configuring", 00:17:06.204 "raid_level": "raid1", 00:17:06.204 "superblock": true, 00:17:06.204 "num_base_bdevs": 3, 00:17:06.204 "num_base_bdevs_discovered": 2, 00:17:06.204 "num_base_bdevs_operational": 3, 00:17:06.204 "base_bdevs_list": [ 00:17:06.204 { 00:17:06.204 "name": "BaseBdev1", 00:17:06.204 "uuid": "cfd547ae-5586-42ce-b51f-195971b7e1a1", 00:17:06.204 "is_configured": true, 00:17:06.204 "data_offset": 2048, 00:17:06.204 "data_size": 63488 00:17:06.204 }, 00:17:06.204 { 00:17:06.204 "name": null, 00:17:06.204 "uuid": "9a30de59-79bd-4613-bead-21c484485912", 00:17:06.204 "is_configured": false, 00:17:06.204 "data_offset": 2048, 00:17:06.204 "data_size": 63488 00:17:06.204 }, 00:17:06.204 { 00:17:06.204 "name": "BaseBdev3", 00:17:06.204 "uuid": "53f1c1f2-3094-4c6d-b48c-5b746cf66d57", 00:17:06.204 "is_configured": true, 00:17:06.204 "data_offset": 2048, 00:17:06.204 "data_size": 63488 00:17:06.204 } 00:17:06.204 ] 00:17:06.204 }' 00:17:06.204 13:16:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:06.204 13:16:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.774 13:16:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.774 13:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:07.034 13:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:07.034 13:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:07.293 [2024-07-26 13:16:47.662897] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:07.293 13:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:07.293 13:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:07.293 13:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:07.293 13:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:07.293 13:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:07.293 13:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:07.293 13:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:07.293 13:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:07.293 13:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:07.293 13:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:07.293 13:16:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.293 13:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.553 13:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:07.553 "name": "Existed_Raid", 00:17:07.553 "uuid": "87277fb9-c90a-470c-9b36-71ac2d730930", 00:17:07.553 "strip_size_kb": 0, 00:17:07.553 "state": "configuring", 00:17:07.553 "raid_level": "raid1", 00:17:07.553 "superblock": true, 00:17:07.553 "num_base_bdevs": 3, 00:17:07.553 "num_base_bdevs_discovered": 1, 00:17:07.553 "num_base_bdevs_operational": 3, 00:17:07.553 "base_bdevs_list": [ 00:17:07.553 { 00:17:07.553 "name": null, 00:17:07.553 "uuid": "cfd547ae-5586-42ce-b51f-195971b7e1a1", 00:17:07.553 "is_configured": false, 00:17:07.553 "data_offset": 2048, 00:17:07.553 "data_size": 63488 00:17:07.553 }, 00:17:07.553 { 00:17:07.553 "name": null, 00:17:07.553 "uuid": "9a30de59-79bd-4613-bead-21c484485912", 00:17:07.553 "is_configured": false, 00:17:07.553 "data_offset": 2048, 00:17:07.553 "data_size": 63488 00:17:07.553 }, 00:17:07.553 { 00:17:07.553 "name": "BaseBdev3", 00:17:07.553 "uuid": "53f1c1f2-3094-4c6d-b48c-5b746cf66d57", 00:17:07.553 "is_configured": true, 00:17:07.553 "data_offset": 2048, 00:17:07.553 "data_size": 63488 00:17:07.553 } 00:17:07.553 ] 00:17:07.553 }' 00:17:07.553 13:16:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:07.553 13:16:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.121 13:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:08.121 13:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.380 13:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:08.380 13:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:08.639 [2024-07-26 13:16:48.911976] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:08.639 13:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:08.639 13:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:08.639 13:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:08.639 13:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:08.639 13:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:08.639 13:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:08.639 13:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:08.639 13:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:08.639 13:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:08.639 13:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:08.639 13:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.639 13:16:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:17:08.639 13:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:08.639 "name": "Existed_Raid", 00:17:08.639 "uuid": "87277fb9-c90a-470c-9b36-71ac2d730930", 00:17:08.639 "strip_size_kb": 0, 00:17:08.639 "state": "configuring", 00:17:08.639 "raid_level": "raid1", 00:17:08.639 "superblock": true, 00:17:08.639 "num_base_bdevs": 3, 00:17:08.639 "num_base_bdevs_discovered": 2, 00:17:08.639 "num_base_bdevs_operational": 3, 00:17:08.639 "base_bdevs_list": [ 00:17:08.639 { 00:17:08.639 "name": null, 00:17:08.639 "uuid": "cfd547ae-5586-42ce-b51f-195971b7e1a1", 00:17:08.640 "is_configured": false, 00:17:08.640 "data_offset": 2048, 00:17:08.640 "data_size": 63488 00:17:08.640 }, 00:17:08.640 { 00:17:08.640 "name": "BaseBdev2", 00:17:08.640 "uuid": "9a30de59-79bd-4613-bead-21c484485912", 00:17:08.640 "is_configured": true, 00:17:08.640 "data_offset": 2048, 00:17:08.640 "data_size": 63488 00:17:08.640 }, 00:17:08.640 { 00:17:08.640 "name": "BaseBdev3", 00:17:08.640 "uuid": "53f1c1f2-3094-4c6d-b48c-5b746cf66d57", 00:17:08.640 "is_configured": true, 00:17:08.640 "data_offset": 2048, 00:17:08.640 "data_size": 63488 00:17:08.640 } 00:17:08.640 ] 00:17:08.640 }' 00:17:08.640 13:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:08.640 13:16:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.208 13:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.208 13:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:09.467 13:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:09.467 13:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.467 13:16:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:09.725 13:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u cfd547ae-5586-42ce-b51f-195971b7e1a1 00:17:10.293 [2024-07-26 13:16:50.535434] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:10.293 [2024-07-26 13:16:50.535567] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1ec1650 00:17:10.293 [2024-07-26 13:16:50.535579] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:10.293 [2024-07-26 13:16:50.535747] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1eca3e0 00:17:10.293 [2024-07-26 13:16:50.535854] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1ec1650 00:17:10.293 [2024-07-26 13:16:50.535863] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1ec1650 00:17:10.293 [2024-07-26 13:16:50.535944] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.293 NewBaseBdev 00:17:10.293 13:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:10.293 13:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:10.293 13:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:10.293 13:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:10.293 13:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
00:17:10.293 13:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:10.293 13:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:10.293 13:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:10.553 [ 00:17:10.553 { 00:17:10.553 "name": "NewBaseBdev", 00:17:10.553 "aliases": [ 00:17:10.553 "cfd547ae-5586-42ce-b51f-195971b7e1a1" 00:17:10.553 ], 00:17:10.553 "product_name": "Malloc disk", 00:17:10.553 "block_size": 512, 00:17:10.553 "num_blocks": 65536, 00:17:10.553 "uuid": "cfd547ae-5586-42ce-b51f-195971b7e1a1", 00:17:10.553 "assigned_rate_limits": { 00:17:10.553 "rw_ios_per_sec": 0, 00:17:10.553 "rw_mbytes_per_sec": 0, 00:17:10.553 "r_mbytes_per_sec": 0, 00:17:10.553 "w_mbytes_per_sec": 0 00:17:10.553 }, 00:17:10.553 "claimed": true, 00:17:10.553 "claim_type": "exclusive_write", 00:17:10.553 "zoned": false, 00:17:10.553 "supported_io_types": { 00:17:10.553 "read": true, 00:17:10.553 "write": true, 00:17:10.553 "unmap": true, 00:17:10.553 "flush": true, 00:17:10.553 "reset": true, 00:17:10.553 "nvme_admin": false, 00:17:10.553 "nvme_io": false, 00:17:10.553 "nvme_io_md": false, 00:17:10.553 "write_zeroes": true, 00:17:10.553 "zcopy": true, 00:17:10.553 "get_zone_info": false, 00:17:10.553 "zone_management": false, 00:17:10.553 "zone_append": false, 00:17:10.553 "compare": false, 00:17:10.553 "compare_and_write": false, 00:17:10.553 "abort": true, 00:17:10.553 "seek_hole": false, 00:17:10.553 "seek_data": false, 00:17:10.553 "copy": true, 00:17:10.553 "nvme_iov_md": false 00:17:10.553 }, 00:17:10.553 "memory_domains": [ 00:17:10.553 { 00:17:10.553 "dma_device_id": "system", 00:17:10.553 "dma_device_type": 1 
00:17:10.553 }, 00:17:10.553 { 00:17:10.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.553 "dma_device_type": 2 00:17:10.553 } 00:17:10.553 ], 00:17:10.553 "driver_specific": {} 00:17:10.553 } 00:17:10.553 ] 00:17:10.553 13:16:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:10.553 13:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:10.553 13:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:10.553 13:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:10.553 13:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:10.553 13:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:10.553 13:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:10.553 13:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:10.553 13:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:10.553 13:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:10.553 13:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:10.553 13:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.553 13:16:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.812 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:10.812 "name": "Existed_Raid", 00:17:10.812 
"uuid": "87277fb9-c90a-470c-9b36-71ac2d730930", 00:17:10.812 "strip_size_kb": 0, 00:17:10.812 "state": "online", 00:17:10.812 "raid_level": "raid1", 00:17:10.812 "superblock": true, 00:17:10.812 "num_base_bdevs": 3, 00:17:10.812 "num_base_bdevs_discovered": 3, 00:17:10.812 "num_base_bdevs_operational": 3, 00:17:10.812 "base_bdevs_list": [ 00:17:10.812 { 00:17:10.812 "name": "NewBaseBdev", 00:17:10.812 "uuid": "cfd547ae-5586-42ce-b51f-195971b7e1a1", 00:17:10.812 "is_configured": true, 00:17:10.812 "data_offset": 2048, 00:17:10.812 "data_size": 63488 00:17:10.812 }, 00:17:10.812 { 00:17:10.812 "name": "BaseBdev2", 00:17:10.812 "uuid": "9a30de59-79bd-4613-bead-21c484485912", 00:17:10.812 "is_configured": true, 00:17:10.812 "data_offset": 2048, 00:17:10.812 "data_size": 63488 00:17:10.812 }, 00:17:10.812 { 00:17:10.812 "name": "BaseBdev3", 00:17:10.812 "uuid": "53f1c1f2-3094-4c6d-b48c-5b746cf66d57", 00:17:10.812 "is_configured": true, 00:17:10.812 "data_offset": 2048, 00:17:10.812 "data_size": 63488 00:17:10.812 } 00:17:10.812 ] 00:17:10.812 }' 00:17:10.812 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:10.812 13:16:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.380 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:11.380 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:11.380 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:11.380 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:11.380 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:11.380 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:11.380 13:16:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:11.380 13:16:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:11.640 [2024-07-26 13:16:52.027676] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:11.640 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:11.640 "name": "Existed_Raid", 00:17:11.640 "aliases": [ 00:17:11.640 "87277fb9-c90a-470c-9b36-71ac2d730930" 00:17:11.640 ], 00:17:11.640 "product_name": "Raid Volume", 00:17:11.640 "block_size": 512, 00:17:11.640 "num_blocks": 63488, 00:17:11.640 "uuid": "87277fb9-c90a-470c-9b36-71ac2d730930", 00:17:11.640 "assigned_rate_limits": { 00:17:11.640 "rw_ios_per_sec": 0, 00:17:11.640 "rw_mbytes_per_sec": 0, 00:17:11.640 "r_mbytes_per_sec": 0, 00:17:11.640 "w_mbytes_per_sec": 0 00:17:11.640 }, 00:17:11.640 "claimed": false, 00:17:11.640 "zoned": false, 00:17:11.640 "supported_io_types": { 00:17:11.640 "read": true, 00:17:11.640 "write": true, 00:17:11.640 "unmap": false, 00:17:11.640 "flush": false, 00:17:11.640 "reset": true, 00:17:11.640 "nvme_admin": false, 00:17:11.640 "nvme_io": false, 00:17:11.640 "nvme_io_md": false, 00:17:11.640 "write_zeroes": true, 00:17:11.640 "zcopy": false, 00:17:11.640 "get_zone_info": false, 00:17:11.640 "zone_management": false, 00:17:11.640 "zone_append": false, 00:17:11.640 "compare": false, 00:17:11.640 "compare_and_write": false, 00:17:11.640 "abort": false, 00:17:11.640 "seek_hole": false, 00:17:11.640 "seek_data": false, 00:17:11.640 "copy": false, 00:17:11.640 "nvme_iov_md": false 00:17:11.640 }, 00:17:11.640 "memory_domains": [ 00:17:11.640 { 00:17:11.640 "dma_device_id": "system", 00:17:11.640 "dma_device_type": 1 00:17:11.640 }, 00:17:11.640 { 00:17:11.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.640 
"dma_device_type": 2 00:17:11.640 }, 00:17:11.640 { 00:17:11.640 "dma_device_id": "system", 00:17:11.640 "dma_device_type": 1 00:17:11.640 }, 00:17:11.640 { 00:17:11.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.640 "dma_device_type": 2 00:17:11.640 }, 00:17:11.640 { 00:17:11.640 "dma_device_id": "system", 00:17:11.640 "dma_device_type": 1 00:17:11.640 }, 00:17:11.640 { 00:17:11.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.640 "dma_device_type": 2 00:17:11.640 } 00:17:11.640 ], 00:17:11.640 "driver_specific": { 00:17:11.640 "raid": { 00:17:11.640 "uuid": "87277fb9-c90a-470c-9b36-71ac2d730930", 00:17:11.640 "strip_size_kb": 0, 00:17:11.640 "state": "online", 00:17:11.640 "raid_level": "raid1", 00:17:11.640 "superblock": true, 00:17:11.640 "num_base_bdevs": 3, 00:17:11.640 "num_base_bdevs_discovered": 3, 00:17:11.640 "num_base_bdevs_operational": 3, 00:17:11.640 "base_bdevs_list": [ 00:17:11.640 { 00:17:11.640 "name": "NewBaseBdev", 00:17:11.640 "uuid": "cfd547ae-5586-42ce-b51f-195971b7e1a1", 00:17:11.640 "is_configured": true, 00:17:11.640 "data_offset": 2048, 00:17:11.640 "data_size": 63488 00:17:11.640 }, 00:17:11.640 { 00:17:11.640 "name": "BaseBdev2", 00:17:11.640 "uuid": "9a30de59-79bd-4613-bead-21c484485912", 00:17:11.640 "is_configured": true, 00:17:11.640 "data_offset": 2048, 00:17:11.640 "data_size": 63488 00:17:11.640 }, 00:17:11.640 { 00:17:11.640 "name": "BaseBdev3", 00:17:11.640 "uuid": "53f1c1f2-3094-4c6d-b48c-5b746cf66d57", 00:17:11.640 "is_configured": true, 00:17:11.640 "data_offset": 2048, 00:17:11.640 "data_size": 63488 00:17:11.640 } 00:17:11.640 ] 00:17:11.640 } 00:17:11.640 } 00:17:11.640 }' 00:17:11.640 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:11.640 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:11.640 BaseBdev2 00:17:11.640 
BaseBdev3' 00:17:11.640 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:11.640 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:11.640 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:11.899 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:11.899 "name": "NewBaseBdev", 00:17:11.899 "aliases": [ 00:17:11.899 "cfd547ae-5586-42ce-b51f-195971b7e1a1" 00:17:11.899 ], 00:17:11.899 "product_name": "Malloc disk", 00:17:11.899 "block_size": 512, 00:17:11.899 "num_blocks": 65536, 00:17:11.899 "uuid": "cfd547ae-5586-42ce-b51f-195971b7e1a1", 00:17:11.899 "assigned_rate_limits": { 00:17:11.899 "rw_ios_per_sec": 0, 00:17:11.899 "rw_mbytes_per_sec": 0, 00:17:11.899 "r_mbytes_per_sec": 0, 00:17:11.899 "w_mbytes_per_sec": 0 00:17:11.899 }, 00:17:11.899 "claimed": true, 00:17:11.899 "claim_type": "exclusive_write", 00:17:11.899 "zoned": false, 00:17:11.899 "supported_io_types": { 00:17:11.899 "read": true, 00:17:11.899 "write": true, 00:17:11.899 "unmap": true, 00:17:11.899 "flush": true, 00:17:11.899 "reset": true, 00:17:11.899 "nvme_admin": false, 00:17:11.899 "nvme_io": false, 00:17:11.899 "nvme_io_md": false, 00:17:11.899 "write_zeroes": true, 00:17:11.899 "zcopy": true, 00:17:11.899 "get_zone_info": false, 00:17:11.899 "zone_management": false, 00:17:11.899 "zone_append": false, 00:17:11.899 "compare": false, 00:17:11.899 "compare_and_write": false, 00:17:11.899 "abort": true, 00:17:11.899 "seek_hole": false, 00:17:11.899 "seek_data": false, 00:17:11.899 "copy": true, 00:17:11.899 "nvme_iov_md": false 00:17:11.899 }, 00:17:11.899 "memory_domains": [ 00:17:11.899 { 00:17:11.899 "dma_device_id": "system", 00:17:11.899 "dma_device_type": 1 00:17:11.899 }, 00:17:11.899 { 
00:17:11.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.899 "dma_device_type": 2 00:17:11.899 } 00:17:11.899 ], 00:17:11.899 "driver_specific": {} 00:17:11.899 }' 00:17:11.899 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:11.899 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:11.899 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:11.899 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:11.899 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:12.158 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:12.158 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:12.158 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:12.158 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:12.158 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:12.158 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:12.158 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:12.158 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:12.158 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:12.158 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:12.417 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:12.417 "name": 
"BaseBdev2", 00:17:12.417 "aliases": [ 00:17:12.417 "9a30de59-79bd-4613-bead-21c484485912" 00:17:12.417 ], 00:17:12.417 "product_name": "Malloc disk", 00:17:12.417 "block_size": 512, 00:17:12.417 "num_blocks": 65536, 00:17:12.417 "uuid": "9a30de59-79bd-4613-bead-21c484485912", 00:17:12.417 "assigned_rate_limits": { 00:17:12.417 "rw_ios_per_sec": 0, 00:17:12.417 "rw_mbytes_per_sec": 0, 00:17:12.417 "r_mbytes_per_sec": 0, 00:17:12.417 "w_mbytes_per_sec": 0 00:17:12.417 }, 00:17:12.417 "claimed": true, 00:17:12.417 "claim_type": "exclusive_write", 00:17:12.417 "zoned": false, 00:17:12.417 "supported_io_types": { 00:17:12.417 "read": true, 00:17:12.417 "write": true, 00:17:12.417 "unmap": true, 00:17:12.417 "flush": true, 00:17:12.417 "reset": true, 00:17:12.417 "nvme_admin": false, 00:17:12.417 "nvme_io": false, 00:17:12.417 "nvme_io_md": false, 00:17:12.417 "write_zeroes": true, 00:17:12.417 "zcopy": true, 00:17:12.417 "get_zone_info": false, 00:17:12.417 "zone_management": false, 00:17:12.417 "zone_append": false, 00:17:12.417 "compare": false, 00:17:12.417 "compare_and_write": false, 00:17:12.417 "abort": true, 00:17:12.417 "seek_hole": false, 00:17:12.417 "seek_data": false, 00:17:12.417 "copy": true, 00:17:12.417 "nvme_iov_md": false 00:17:12.417 }, 00:17:12.417 "memory_domains": [ 00:17:12.417 { 00:17:12.417 "dma_device_id": "system", 00:17:12.417 "dma_device_type": 1 00:17:12.417 }, 00:17:12.417 { 00:17:12.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.417 "dma_device_type": 2 00:17:12.417 } 00:17:12.417 ], 00:17:12.417 "driver_specific": {} 00:17:12.417 }' 00:17:12.417 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:12.417 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:12.676 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:12.676 13:16:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 
-- # jq .md_size 00:17:12.676 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:12.676 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:12.676 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:12.676 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:12.676 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:12.676 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:12.676 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:12.934 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:12.934 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:12.934 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:12.934 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:12.934 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:12.934 "name": "BaseBdev3", 00:17:12.934 "aliases": [ 00:17:12.934 "53f1c1f2-3094-4c6d-b48c-5b746cf66d57" 00:17:12.934 ], 00:17:12.934 "product_name": "Malloc disk", 00:17:12.934 "block_size": 512, 00:17:12.934 "num_blocks": 65536, 00:17:12.934 "uuid": "53f1c1f2-3094-4c6d-b48c-5b746cf66d57", 00:17:12.934 "assigned_rate_limits": { 00:17:12.934 "rw_ios_per_sec": 0, 00:17:12.934 "rw_mbytes_per_sec": 0, 00:17:12.934 "r_mbytes_per_sec": 0, 00:17:12.934 "w_mbytes_per_sec": 0 00:17:12.934 }, 00:17:12.934 "claimed": true, 00:17:12.934 "claim_type": "exclusive_write", 00:17:12.934 "zoned": 
false, 00:17:12.934 "supported_io_types": { 00:17:12.934 "read": true, 00:17:12.934 "write": true, 00:17:12.934 "unmap": true, 00:17:12.934 "flush": true, 00:17:12.934 "reset": true, 00:17:12.934 "nvme_admin": false, 00:17:12.934 "nvme_io": false, 00:17:12.934 "nvme_io_md": false, 00:17:12.934 "write_zeroes": true, 00:17:12.934 "zcopy": true, 00:17:12.934 "get_zone_info": false, 00:17:12.934 "zone_management": false, 00:17:12.934 "zone_append": false, 00:17:12.934 "compare": false, 00:17:12.934 "compare_and_write": false, 00:17:12.934 "abort": true, 00:17:12.934 "seek_hole": false, 00:17:12.934 "seek_data": false, 00:17:12.934 "copy": true, 00:17:12.934 "nvme_iov_md": false 00:17:12.934 }, 00:17:12.934 "memory_domains": [ 00:17:12.934 { 00:17:12.934 "dma_device_id": "system", 00:17:12.934 "dma_device_type": 1 00:17:12.934 }, 00:17:12.934 { 00:17:12.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.934 "dma_device_type": 2 00:17:12.934 } 00:17:12.934 ], 00:17:12.934 "driver_specific": {} 00:17:12.934 }' 00:17:12.934 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.192 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.192 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:13.192 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.192 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.192 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:13.192 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:13.192 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:13.192 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:13.192 13:16:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:13.451 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:13.451 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:13.451 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:13.451 [2024-07-26 13:16:53.972535] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:13.451 [2024-07-26 13:16:53.972557] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:13.451 [2024-07-26 13:16:53.972602] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:13.451 [2024-07-26 13:16:53.972848] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:13.451 [2024-07-26 13:16:53.972865] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ec1650 name Existed_Raid, state offline 00:17:13.711 13:16:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 715436 00:17:13.711 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 715436 ']' 00:17:13.711 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 715436 00:17:13.711 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:17:13.711 13:16:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:13.711 13:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 715436 00:17:13.711 13:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:17:13.711 13:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:13.711 13:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 715436' 00:17:13.711 killing process with pid 715436 00:17:13.711 13:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 715436 00:17:13.711 [2024-07-26 13:16:54.048403] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:13.711 13:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 715436 00:17:13.711 [2024-07-26 13:16:54.072355] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:13.970 13:16:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:17:13.970 00:17:13.970 real 0m26.814s 00:17:13.970 user 0m49.231s 00:17:13.970 sys 0m4.762s 00:17:13.970 13:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:13.970 13:16:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.970 ************************************ 00:17:13.970 END TEST raid_state_function_test_sb 00:17:13.970 ************************************ 00:17:13.970 13:16:54 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:17:13.971 13:16:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:13.971 13:16:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:13.971 13:16:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:13.971 ************************************ 00:17:13.971 START TEST raid_superblock_test 00:17:13.971 ************************************ 00:17:13.971 13:16:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:17:13.971 13:16:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:17:13.971 13:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:17:13.971 13:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:17:13.971 13:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:17:13.971 13:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:17:13.971 13:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:17:13.971 13:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:17:13.971 13:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:17:13.971 13:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:17:13.971 13:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:17:13.971 13:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:17:13.971 13:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:17:13.971 13:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:17:13.971 13:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:17:13.971 13:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:17:13.971 13:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=720526 00:17:13.971 13:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 720526 /var/tmp/spdk-raid.sock 00:17:13.971 13:16:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:13.971 13:16:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@831 -- # '[' -z 720526 ']' 00:17:13.971 13:16:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:13.971 13:16:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:13.971 13:16:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:13.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:13.971 13:16:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:13.971 13:16:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.971 [2024-07-26 13:16:54.409127] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:17:13.971 [2024-07-26 13:16:54.409189] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid720526 ] 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:17:13.971 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3d:02.3 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: 
Requested device 0000:3f:01.3 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:13.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:13.971 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:14.231 [2024-07-26 13:16:54.542409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.231 [2024-07-26 13:16:54.626305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.231 [2024-07-26 13:16:54.680028] 
bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:14.231 [2024-07-26 13:16:54.680056] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:14.798 13:16:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:14.798 13:16:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:17:14.798 13:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:17:14.798 13:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:17:14.798 13:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:17:14.798 13:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:17:14.798 13:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:14.798 13:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:14.798 13:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:17:14.798 13:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:14.798 13:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:15.057 malloc1 00:17:15.057 13:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:15.317 [2024-07-26 13:16:55.756066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:15.317 [2024-07-26 13:16:55.756113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:15.317 [2024-07-26 13:16:55.756129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13772f0 00:17:15.317 [2024-07-26 13:16:55.756146] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.317 [2024-07-26 13:16:55.757581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.317 [2024-07-26 13:16:55.757609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:15.317 pt1 00:17:15.317 13:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:17:15.317 13:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:17:15.317 13:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:17:15.317 13:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:17:15.317 13:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:15.317 13:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:15.317 13:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:17:15.317 13:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:15.317 13:16:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:15.576 malloc2 00:17:15.576 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:15.835 [2024-07-26 13:16:56.217496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:17:15.835 [2024-07-26 13:16:56.217534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.835 [2024-07-26 13:16:56.217549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13786d0 00:17:15.835 [2024-07-26 13:16:56.217560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.835 [2024-07-26 13:16:56.218909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.835 [2024-07-26 13:16:56.218936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:15.835 pt2 00:17:15.835 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:17:15.835 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:17:15.835 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:17:15.835 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:17:15.835 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:15.835 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:15.835 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:17:15.835 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:15.835 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:16.094 malloc3 00:17:16.094 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 
00000000-0000-0000-0000-000000000003 00:17:16.352 [2024-07-26 13:16:56.666946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:16.352 [2024-07-26 13:16:56.666987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.352 [2024-07-26 13:16:56.667003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15116b0 00:17:16.353 [2024-07-26 13:16:56.667014] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.353 [2024-07-26 13:16:56.668297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.353 [2024-07-26 13:16:56.668325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:16.353 pt3 00:17:16.353 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:17:16.353 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:17:16.353 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:16.612 [2024-07-26 13:16:56.895574] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:16.612 [2024-07-26 13:16:56.896662] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:16.612 [2024-07-26 13:16:56.896712] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:16.612 [2024-07-26 13:16:56.896838] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1511cb0 00:17:16.612 [2024-07-26 13:16:56.896848] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:16.612 [2024-07-26 13:16:56.897029] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x136f5f0 00:17:16.612 [2024-07-26 13:16:56.897168] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1511cb0 00:17:16.612 [2024-07-26 13:16:56.897179] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1511cb0 00:17:16.612 [2024-07-26 13:16:56.897275] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.612 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:16.612 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:16.612 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:16.612 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:16.612 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:16.612 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:16.612 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:16.612 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:16.612 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:16.612 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:16.612 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.612 13:16:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.871 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:16.871 "name": "raid_bdev1", 00:17:16.871 "uuid": "61806a0b-37d8-4cbf-9364-eb5774625bbe", 00:17:16.871 
"strip_size_kb": 0, 00:17:16.871 "state": "online", 00:17:16.871 "raid_level": "raid1", 00:17:16.871 "superblock": true, 00:17:16.871 "num_base_bdevs": 3, 00:17:16.871 "num_base_bdevs_discovered": 3, 00:17:16.871 "num_base_bdevs_operational": 3, 00:17:16.871 "base_bdevs_list": [ 00:17:16.871 { 00:17:16.871 "name": "pt1", 00:17:16.871 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:16.871 "is_configured": true, 00:17:16.871 "data_offset": 2048, 00:17:16.871 "data_size": 63488 00:17:16.871 }, 00:17:16.871 { 00:17:16.871 "name": "pt2", 00:17:16.871 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:16.871 "is_configured": true, 00:17:16.871 "data_offset": 2048, 00:17:16.871 "data_size": 63488 00:17:16.871 }, 00:17:16.871 { 00:17:16.871 "name": "pt3", 00:17:16.871 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:16.871 "is_configured": true, 00:17:16.871 "data_offset": 2048, 00:17:16.871 "data_size": 63488 00:17:16.871 } 00:17:16.871 ] 00:17:16.871 }' 00:17:16.871 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:16.871 13:16:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.438 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:17:17.438 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:17.438 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:17.438 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:17.438 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:17.438 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:17.438 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b raid_bdev1 00:17:17.438 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:17.438 [2024-07-26 13:16:57.906438] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:17.438 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:17.438 "name": "raid_bdev1", 00:17:17.438 "aliases": [ 00:17:17.438 "61806a0b-37d8-4cbf-9364-eb5774625bbe" 00:17:17.438 ], 00:17:17.438 "product_name": "Raid Volume", 00:17:17.438 "block_size": 512, 00:17:17.438 "num_blocks": 63488, 00:17:17.438 "uuid": "61806a0b-37d8-4cbf-9364-eb5774625bbe", 00:17:17.438 "assigned_rate_limits": { 00:17:17.438 "rw_ios_per_sec": 0, 00:17:17.438 "rw_mbytes_per_sec": 0, 00:17:17.438 "r_mbytes_per_sec": 0, 00:17:17.438 "w_mbytes_per_sec": 0 00:17:17.438 }, 00:17:17.438 "claimed": false, 00:17:17.438 "zoned": false, 00:17:17.438 "supported_io_types": { 00:17:17.438 "read": true, 00:17:17.438 "write": true, 00:17:17.438 "unmap": false, 00:17:17.438 "flush": false, 00:17:17.438 "reset": true, 00:17:17.438 "nvme_admin": false, 00:17:17.438 "nvme_io": false, 00:17:17.438 "nvme_io_md": false, 00:17:17.438 "write_zeroes": true, 00:17:17.438 "zcopy": false, 00:17:17.438 "get_zone_info": false, 00:17:17.438 "zone_management": false, 00:17:17.438 "zone_append": false, 00:17:17.438 "compare": false, 00:17:17.438 "compare_and_write": false, 00:17:17.438 "abort": false, 00:17:17.438 "seek_hole": false, 00:17:17.438 "seek_data": false, 00:17:17.438 "copy": false, 00:17:17.439 "nvme_iov_md": false 00:17:17.439 }, 00:17:17.439 "memory_domains": [ 00:17:17.439 { 00:17:17.439 "dma_device_id": "system", 00:17:17.439 "dma_device_type": 1 00:17:17.439 }, 00:17:17.439 { 00:17:17.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.439 "dma_device_type": 2 00:17:17.439 }, 00:17:17.439 { 00:17:17.439 "dma_device_id": "system", 00:17:17.439 "dma_device_type": 1 00:17:17.439 }, 00:17:17.439 { 00:17:17.439 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.439 "dma_device_type": 2 00:17:17.439 }, 00:17:17.439 { 00:17:17.439 "dma_device_id": "system", 00:17:17.439 "dma_device_type": 1 00:17:17.439 }, 00:17:17.439 { 00:17:17.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.439 "dma_device_type": 2 00:17:17.439 } 00:17:17.439 ], 00:17:17.439 "driver_specific": { 00:17:17.439 "raid": { 00:17:17.439 "uuid": "61806a0b-37d8-4cbf-9364-eb5774625bbe", 00:17:17.439 "strip_size_kb": 0, 00:17:17.439 "state": "online", 00:17:17.439 "raid_level": "raid1", 00:17:17.439 "superblock": true, 00:17:17.439 "num_base_bdevs": 3, 00:17:17.439 "num_base_bdevs_discovered": 3, 00:17:17.439 "num_base_bdevs_operational": 3, 00:17:17.439 "base_bdevs_list": [ 00:17:17.439 { 00:17:17.439 "name": "pt1", 00:17:17.439 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:17.439 "is_configured": true, 00:17:17.439 "data_offset": 2048, 00:17:17.439 "data_size": 63488 00:17:17.439 }, 00:17:17.439 { 00:17:17.439 "name": "pt2", 00:17:17.439 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:17.439 "is_configured": true, 00:17:17.439 "data_offset": 2048, 00:17:17.439 "data_size": 63488 00:17:17.439 }, 00:17:17.439 { 00:17:17.439 "name": "pt3", 00:17:17.439 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:17.439 "is_configured": true, 00:17:17.439 "data_offset": 2048, 00:17:17.439 "data_size": 63488 00:17:17.439 } 00:17:17.439 ] 00:17:17.439 } 00:17:17.439 } 00:17:17.439 }' 00:17:17.439 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:17.697 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:17.697 pt2 00:17:17.697 pt3' 00:17:17.697 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:17.697 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:17.697 13:16:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:17.697 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:17.697 "name": "pt1", 00:17:17.697 "aliases": [ 00:17:17.697 "00000000-0000-0000-0000-000000000001" 00:17:17.697 ], 00:17:17.697 "product_name": "passthru", 00:17:17.697 "block_size": 512, 00:17:17.698 "num_blocks": 65536, 00:17:17.698 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:17.698 "assigned_rate_limits": { 00:17:17.698 "rw_ios_per_sec": 0, 00:17:17.698 "rw_mbytes_per_sec": 0, 00:17:17.698 "r_mbytes_per_sec": 0, 00:17:17.698 "w_mbytes_per_sec": 0 00:17:17.698 }, 00:17:17.698 "claimed": true, 00:17:17.698 "claim_type": "exclusive_write", 00:17:17.698 "zoned": false, 00:17:17.698 "supported_io_types": { 00:17:17.698 "read": true, 00:17:17.698 "write": true, 00:17:17.698 "unmap": true, 00:17:17.698 "flush": true, 00:17:17.698 "reset": true, 00:17:17.698 "nvme_admin": false, 00:17:17.698 "nvme_io": false, 00:17:17.698 "nvme_io_md": false, 00:17:17.698 "write_zeroes": true, 00:17:17.698 "zcopy": true, 00:17:17.698 "get_zone_info": false, 00:17:17.698 "zone_management": false, 00:17:17.698 "zone_append": false, 00:17:17.698 "compare": false, 00:17:17.698 "compare_and_write": false, 00:17:17.698 "abort": true, 00:17:17.698 "seek_hole": false, 00:17:17.698 "seek_data": false, 00:17:17.698 "copy": true, 00:17:17.698 "nvme_iov_md": false 00:17:17.698 }, 00:17:17.698 "memory_domains": [ 00:17:17.698 { 00:17:17.698 "dma_device_id": "system", 00:17:17.698 "dma_device_type": 1 00:17:17.698 }, 00:17:17.698 { 00:17:17.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.698 "dma_device_type": 2 00:17:17.698 } 00:17:17.698 ], 00:17:17.698 "driver_specific": { 00:17:17.698 "passthru": { 00:17:17.698 "name": "pt1", 00:17:17.698 "base_bdev_name": "malloc1" 
00:17:17.698 } 00:17:17.698 } 00:17:17.698 }' 00:17:17.698 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:17.956 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:17.956 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:17.956 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:17.956 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:17.956 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:17.956 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:18.216 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:18.217 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:18.217 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:18.217 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:18.217 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:18.217 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:18.217 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:18.217 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:18.508 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:18.508 "name": "pt2", 00:17:18.508 "aliases": [ 00:17:18.508 "00000000-0000-0000-0000-000000000002" 00:17:18.508 ], 00:17:18.508 "product_name": "passthru", 00:17:18.508 "block_size": 512, 00:17:18.508 "num_blocks": 65536, 00:17:18.508 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:17:18.508 "assigned_rate_limits": { 00:17:18.508 "rw_ios_per_sec": 0, 00:17:18.508 "rw_mbytes_per_sec": 0, 00:17:18.508 "r_mbytes_per_sec": 0, 00:17:18.508 "w_mbytes_per_sec": 0 00:17:18.508 }, 00:17:18.508 "claimed": true, 00:17:18.508 "claim_type": "exclusive_write", 00:17:18.508 "zoned": false, 00:17:18.508 "supported_io_types": { 00:17:18.508 "read": true, 00:17:18.508 "write": true, 00:17:18.508 "unmap": true, 00:17:18.508 "flush": true, 00:17:18.508 "reset": true, 00:17:18.508 "nvme_admin": false, 00:17:18.508 "nvme_io": false, 00:17:18.508 "nvme_io_md": false, 00:17:18.508 "write_zeroes": true, 00:17:18.508 "zcopy": true, 00:17:18.508 "get_zone_info": false, 00:17:18.508 "zone_management": false, 00:17:18.508 "zone_append": false, 00:17:18.508 "compare": false, 00:17:18.508 "compare_and_write": false, 00:17:18.508 "abort": true, 00:17:18.508 "seek_hole": false, 00:17:18.508 "seek_data": false, 00:17:18.508 "copy": true, 00:17:18.508 "nvme_iov_md": false 00:17:18.508 }, 00:17:18.508 "memory_domains": [ 00:17:18.508 { 00:17:18.508 "dma_device_id": "system", 00:17:18.508 "dma_device_type": 1 00:17:18.508 }, 00:17:18.508 { 00:17:18.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.508 "dma_device_type": 2 00:17:18.508 } 00:17:18.508 ], 00:17:18.508 "driver_specific": { 00:17:18.508 "passthru": { 00:17:18.508 "name": "pt2", 00:17:18.508 "base_bdev_name": "malloc2" 00:17:18.508 } 00:17:18.508 } 00:17:18.508 }' 00:17:18.508 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:18.508 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:18.508 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:18.508 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:18.508 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:18.508 13:16:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:18.508 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:18.508 13:16:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:18.508 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:18.508 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:18.767 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:18.767 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:18.767 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:18.767 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:18.767 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:19.025 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:19.025 "name": "pt3", 00:17:19.025 "aliases": [ 00:17:19.025 "00000000-0000-0000-0000-000000000003" 00:17:19.025 ], 00:17:19.025 "product_name": "passthru", 00:17:19.025 "block_size": 512, 00:17:19.025 "num_blocks": 65536, 00:17:19.025 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:19.025 "assigned_rate_limits": { 00:17:19.025 "rw_ios_per_sec": 0, 00:17:19.025 "rw_mbytes_per_sec": 0, 00:17:19.025 "r_mbytes_per_sec": 0, 00:17:19.025 "w_mbytes_per_sec": 0 00:17:19.025 }, 00:17:19.025 "claimed": true, 00:17:19.025 "claim_type": "exclusive_write", 00:17:19.025 "zoned": false, 00:17:19.025 "supported_io_types": { 00:17:19.025 "read": true, 00:17:19.025 "write": true, 00:17:19.025 "unmap": true, 00:17:19.025 "flush": true, 00:17:19.025 "reset": true, 00:17:19.025 "nvme_admin": false, 00:17:19.025 
"nvme_io": false, 00:17:19.025 "nvme_io_md": false, 00:17:19.025 "write_zeroes": true, 00:17:19.025 "zcopy": true, 00:17:19.025 "get_zone_info": false, 00:17:19.025 "zone_management": false, 00:17:19.025 "zone_append": false, 00:17:19.025 "compare": false, 00:17:19.025 "compare_and_write": false, 00:17:19.025 "abort": true, 00:17:19.025 "seek_hole": false, 00:17:19.025 "seek_data": false, 00:17:19.025 "copy": true, 00:17:19.025 "nvme_iov_md": false 00:17:19.025 }, 00:17:19.025 "memory_domains": [ 00:17:19.025 { 00:17:19.025 "dma_device_id": "system", 00:17:19.025 "dma_device_type": 1 00:17:19.025 }, 00:17:19.025 { 00:17:19.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.025 "dma_device_type": 2 00:17:19.025 } 00:17:19.025 ], 00:17:19.025 "driver_specific": { 00:17:19.025 "passthru": { 00:17:19.025 "name": "pt3", 00:17:19.025 "base_bdev_name": "malloc3" 00:17:19.025 } 00:17:19.025 } 00:17:19.025 }' 00:17:19.025 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:19.025 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:19.025 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:19.025 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:19.025 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:19.283 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:19.283 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:19.283 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:19.283 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:19.283 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:19.283 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:17:19.283 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:19.283 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:19.283 13:16:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:17:19.851 [2024-07-26 13:17:00.256648] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:19.851 13:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=61806a0b-37d8-4cbf-9364-eb5774625bbe 00:17:19.851 13:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 61806a0b-37d8-4cbf-9364-eb5774625bbe ']' 00:17:19.851 13:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:20.110 [2024-07-26 13:17:00.489001] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:20.110 [2024-07-26 13:17:00.489024] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:20.110 [2024-07-26 13:17:00.489076] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:20.110 [2024-07-26 13:17:00.489149] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:20.110 [2024-07-26 13:17:00.489162] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1511cb0 name raid_bdev1, state offline 00:17:20.110 13:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.110 13:17:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:17:20.678 13:17:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:17:20.678 13:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:17:20.678 13:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:17:20.678 13:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:20.678 13:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:17:20.678 13:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:20.936 13:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:17:20.937 13:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:21.196 13:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:21.196 13:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:21.455 13:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:17:21.455 13:17:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:21.455 13:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:17:21.455 13:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:21.455 13:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:21.455 13:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.455 13:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:21.455 13:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.455 13:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:21.455 13:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:21.455 13:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:17:21.455 13:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:17:21.455 13:17:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:22.025 [2024-07-26 13:17:02.265734] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:22.025 [2024-07-26 13:17:02.266993] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:22.025 [2024-07-26 13:17:02.267034] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:22.025 [2024-07-26 13:17:02.267078] 
bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:22.025 [2024-07-26 13:17:02.267116] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:22.025 [2024-07-26 13:17:02.267137] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:22.025 [2024-07-26 13:17:02.267162] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:22.025 [2024-07-26 13:17:02.267176] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1511cb0 name raid_bdev1, state configuring 00:17:22.025 request: 00:17:22.025 { 00:17:22.025 "name": "raid_bdev1", 00:17:22.025 "raid_level": "raid1", 00:17:22.025 "base_bdevs": [ 00:17:22.025 "malloc1", 00:17:22.025 "malloc2", 00:17:22.025 "malloc3" 00:17:22.025 ], 00:17:22.025 "superblock": false, 00:17:22.025 "method": "bdev_raid_create", 00:17:22.025 "req_id": 1 00:17:22.025 } 00:17:22.025 Got JSON-RPC error response 00:17:22.025 response: 00:17:22.025 { 00:17:22.025 "code": -17, 00:17:22.025 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:22.025 } 00:17:22.025 13:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:17:22.025 13:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:22.025 13:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:22.025 13:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:22.025 13:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.025 13:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:17:22.025 13:17:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:17:22.025 13:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:17:22.025 13:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:22.284 [2024-07-26 13:17:02.658711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:22.284 [2024-07-26 13:17:02.658753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.284 [2024-07-26 13:17:02.658771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x150ed00 00:17:22.284 [2024-07-26 13:17:02.658782] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.284 [2024-07-26 13:17:02.660282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.284 [2024-07-26 13:17:02.660312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:22.284 [2024-07-26 13:17:02.660374] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:22.284 [2024-07-26 13:17:02.660401] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:22.284 pt1 00:17:22.284 13:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:22.284 13:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:22.284 13:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:22.284 13:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:22.284 13:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:22.284 
13:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:22.284 13:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:22.284 13:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:22.284 13:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:22.284 13:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:22.284 13:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.284 13:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.543 13:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:22.543 "name": "raid_bdev1", 00:17:22.543 "uuid": "61806a0b-37d8-4cbf-9364-eb5774625bbe", 00:17:22.543 "strip_size_kb": 0, 00:17:22.543 "state": "configuring", 00:17:22.543 "raid_level": "raid1", 00:17:22.543 "superblock": true, 00:17:22.543 "num_base_bdevs": 3, 00:17:22.543 "num_base_bdevs_discovered": 1, 00:17:22.543 "num_base_bdevs_operational": 3, 00:17:22.543 "base_bdevs_list": [ 00:17:22.543 { 00:17:22.543 "name": "pt1", 00:17:22.543 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:22.543 "is_configured": true, 00:17:22.543 "data_offset": 2048, 00:17:22.543 "data_size": 63488 00:17:22.543 }, 00:17:22.543 { 00:17:22.543 "name": null, 00:17:22.543 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:22.543 "is_configured": false, 00:17:22.543 "data_offset": 2048, 00:17:22.543 "data_size": 63488 00:17:22.543 }, 00:17:22.543 { 00:17:22.543 "name": null, 00:17:22.543 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:22.543 "is_configured": false, 00:17:22.543 "data_offset": 2048, 00:17:22.543 "data_size": 63488 
00:17:22.543 } 00:17:22.543 ] 00:17:22.543 }' 00:17:22.543 13:17:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:22.543 13:17:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.482 13:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:17:23.482 13:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:23.482 [2024-07-26 13:17:03.885950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:23.482 [2024-07-26 13:17:03.885999] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.482 [2024-07-26 13:17:03.886019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1377790 00:17:23.482 [2024-07-26 13:17:03.886030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.482 [2024-07-26 13:17:03.886358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.482 [2024-07-26 13:17:03.886378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:23.482 [2024-07-26 13:17:03.886438] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:23.482 [2024-07-26 13:17:03.886455] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:23.482 pt2 00:17:23.482 13:17:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:23.741 [2024-07-26 13:17:04.118582] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:23.741 13:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 
configuring raid1 0 3 00:17:23.741 13:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:23.741 13:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:23.741 13:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:23.741 13:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:23.741 13:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:23.741 13:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:23.741 13:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:23.741 13:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:23.741 13:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:23.741 13:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.741 13:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.001 13:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:24.001 "name": "raid_bdev1", 00:17:24.001 "uuid": "61806a0b-37d8-4cbf-9364-eb5774625bbe", 00:17:24.001 "strip_size_kb": 0, 00:17:24.001 "state": "configuring", 00:17:24.001 "raid_level": "raid1", 00:17:24.001 "superblock": true, 00:17:24.001 "num_base_bdevs": 3, 00:17:24.001 "num_base_bdevs_discovered": 1, 00:17:24.001 "num_base_bdevs_operational": 3, 00:17:24.001 "base_bdevs_list": [ 00:17:24.001 { 00:17:24.001 "name": "pt1", 00:17:24.001 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:24.001 "is_configured": true, 00:17:24.001 "data_offset": 2048, 
00:17:24.001 "data_size": 63488 00:17:24.001 }, 00:17:24.001 { 00:17:24.001 "name": null, 00:17:24.001 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:24.001 "is_configured": false, 00:17:24.001 "data_offset": 2048, 00:17:24.001 "data_size": 63488 00:17:24.001 }, 00:17:24.001 { 00:17:24.001 "name": null, 00:17:24.001 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:24.001 "is_configured": false, 00:17:24.001 "data_offset": 2048, 00:17:24.001 "data_size": 63488 00:17:24.001 } 00:17:24.001 ] 00:17:24.001 }' 00:17:24.001 13:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:24.001 13:17:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.569 13:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:17:24.569 13:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:17:24.569 13:17:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:24.828 [2024-07-26 13:17:05.145328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:24.828 [2024-07-26 13:17:05.145378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.828 [2024-07-26 13:17:05.145395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x136e9e0 00:17:24.828 [2024-07-26 13:17:05.145407] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.828 [2024-07-26 13:17:05.145733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.828 [2024-07-26 13:17:05.145751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:24.828 [2024-07-26 13:17:05.145808] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt2 00:17:24.828 [2024-07-26 13:17:05.145825] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:24.828 pt2 00:17:24.828 13:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:17:24.828 13:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:17:24.828 13:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:25.087 [2024-07-26 13:17:05.373922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:25.087 [2024-07-26 13:17:05.373955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.087 [2024-07-26 13:17:05.373970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1370dc0 00:17:25.087 [2024-07-26 13:17:05.373981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.087 [2024-07-26 13:17:05.374255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.087 [2024-07-26 13:17:05.374273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:25.087 [2024-07-26 13:17:05.374320] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:25.087 [2024-07-26 13:17:05.374336] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:25.087 [2024-07-26 13:17:05.374435] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1370160 00:17:25.087 [2024-07-26 13:17:05.374445] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:25.087 [2024-07-26 13:17:05.374596] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1371050 00:17:25.087 [2024-07-26 13:17:05.374718] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1370160 00:17:25.087 [2024-07-26 13:17:05.374727] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1370160 00:17:25.087 [2024-07-26 13:17:05.374814] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.087 pt3 00:17:25.087 13:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:17:25.087 13:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:17:25.087 13:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:25.087 13:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:25.087 13:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:25.087 13:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:25.087 13:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:25.087 13:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:25.087 13:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:25.087 13:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:25.087 13:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:25.087 13:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:25.087 13:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.087 13:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:17:25.347 13:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:25.347 "name": "raid_bdev1", 00:17:25.347 "uuid": "61806a0b-37d8-4cbf-9364-eb5774625bbe", 00:17:25.347 "strip_size_kb": 0, 00:17:25.347 "state": "online", 00:17:25.347 "raid_level": "raid1", 00:17:25.347 "superblock": true, 00:17:25.347 "num_base_bdevs": 3, 00:17:25.347 "num_base_bdevs_discovered": 3, 00:17:25.347 "num_base_bdevs_operational": 3, 00:17:25.347 "base_bdevs_list": [ 00:17:25.347 { 00:17:25.347 "name": "pt1", 00:17:25.347 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:25.347 "is_configured": true, 00:17:25.347 "data_offset": 2048, 00:17:25.347 "data_size": 63488 00:17:25.347 }, 00:17:25.347 { 00:17:25.347 "name": "pt2", 00:17:25.347 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:25.347 "is_configured": true, 00:17:25.347 "data_offset": 2048, 00:17:25.347 "data_size": 63488 00:17:25.347 }, 00:17:25.347 { 00:17:25.347 "name": "pt3", 00:17:25.347 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:25.347 "is_configured": true, 00:17:25.347 "data_offset": 2048, 00:17:25.347 "data_size": 63488 00:17:25.347 } 00:17:25.347 ] 00:17:25.347 }' 00:17:25.347 13:17:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:25.347 13:17:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.285 13:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:17:26.285 13:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:26.285 13:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:26.285 13:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:26.285 13:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:26.285 13:17:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@198 -- # local name 00:17:26.285 13:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:26.285 13:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:26.285 [2024-07-26 13:17:06.701711] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:26.285 13:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:26.285 "name": "raid_bdev1", 00:17:26.285 "aliases": [ 00:17:26.285 "61806a0b-37d8-4cbf-9364-eb5774625bbe" 00:17:26.285 ], 00:17:26.285 "product_name": "Raid Volume", 00:17:26.285 "block_size": 512, 00:17:26.285 "num_blocks": 63488, 00:17:26.285 "uuid": "61806a0b-37d8-4cbf-9364-eb5774625bbe", 00:17:26.285 "assigned_rate_limits": { 00:17:26.285 "rw_ios_per_sec": 0, 00:17:26.285 "rw_mbytes_per_sec": 0, 00:17:26.285 "r_mbytes_per_sec": 0, 00:17:26.285 "w_mbytes_per_sec": 0 00:17:26.285 }, 00:17:26.285 "claimed": false, 00:17:26.285 "zoned": false, 00:17:26.285 "supported_io_types": { 00:17:26.285 "read": true, 00:17:26.286 "write": true, 00:17:26.286 "unmap": false, 00:17:26.286 "flush": false, 00:17:26.286 "reset": true, 00:17:26.286 "nvme_admin": false, 00:17:26.286 "nvme_io": false, 00:17:26.286 "nvme_io_md": false, 00:17:26.286 "write_zeroes": true, 00:17:26.286 "zcopy": false, 00:17:26.286 "get_zone_info": false, 00:17:26.286 "zone_management": false, 00:17:26.286 "zone_append": false, 00:17:26.286 "compare": false, 00:17:26.286 "compare_and_write": false, 00:17:26.286 "abort": false, 00:17:26.286 "seek_hole": false, 00:17:26.286 "seek_data": false, 00:17:26.286 "copy": false, 00:17:26.286 "nvme_iov_md": false 00:17:26.286 }, 00:17:26.286 "memory_domains": [ 00:17:26.286 { 00:17:26.286 "dma_device_id": "system", 00:17:26.286 "dma_device_type": 1 00:17:26.286 }, 00:17:26.286 { 00:17:26.286 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:26.286 "dma_device_type": 2 00:17:26.286 }, 00:17:26.286 { 00:17:26.286 "dma_device_id": "system", 00:17:26.286 "dma_device_type": 1 00:17:26.286 }, 00:17:26.286 { 00:17:26.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.286 "dma_device_type": 2 00:17:26.286 }, 00:17:26.286 { 00:17:26.286 "dma_device_id": "system", 00:17:26.286 "dma_device_type": 1 00:17:26.286 }, 00:17:26.286 { 00:17:26.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.286 "dma_device_type": 2 00:17:26.286 } 00:17:26.286 ], 00:17:26.286 "driver_specific": { 00:17:26.286 "raid": { 00:17:26.286 "uuid": "61806a0b-37d8-4cbf-9364-eb5774625bbe", 00:17:26.286 "strip_size_kb": 0, 00:17:26.286 "state": "online", 00:17:26.286 "raid_level": "raid1", 00:17:26.286 "superblock": true, 00:17:26.286 "num_base_bdevs": 3, 00:17:26.286 "num_base_bdevs_discovered": 3, 00:17:26.286 "num_base_bdevs_operational": 3, 00:17:26.286 "base_bdevs_list": [ 00:17:26.286 { 00:17:26.286 "name": "pt1", 00:17:26.286 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:26.286 "is_configured": true, 00:17:26.286 "data_offset": 2048, 00:17:26.286 "data_size": 63488 00:17:26.286 }, 00:17:26.286 { 00:17:26.286 "name": "pt2", 00:17:26.286 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:26.286 "is_configured": true, 00:17:26.286 "data_offset": 2048, 00:17:26.286 "data_size": 63488 00:17:26.286 }, 00:17:26.286 { 00:17:26.286 "name": "pt3", 00:17:26.286 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:26.286 "is_configured": true, 00:17:26.286 "data_offset": 2048, 00:17:26.286 "data_size": 63488 00:17:26.286 } 00:17:26.286 ] 00:17:26.286 } 00:17:26.286 } 00:17:26.286 }' 00:17:26.286 13:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:26.286 13:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:26.286 pt2 00:17:26.286 pt3' 00:17:26.286 
13:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:26.286 13:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:26.286 13:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:26.546 13:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:26.546 "name": "pt1", 00:17:26.546 "aliases": [ 00:17:26.546 "00000000-0000-0000-0000-000000000001" 00:17:26.546 ], 00:17:26.546 "product_name": "passthru", 00:17:26.546 "block_size": 512, 00:17:26.546 "num_blocks": 65536, 00:17:26.546 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:26.546 "assigned_rate_limits": { 00:17:26.546 "rw_ios_per_sec": 0, 00:17:26.546 "rw_mbytes_per_sec": 0, 00:17:26.546 "r_mbytes_per_sec": 0, 00:17:26.546 "w_mbytes_per_sec": 0 00:17:26.546 }, 00:17:26.546 "claimed": true, 00:17:26.546 "claim_type": "exclusive_write", 00:17:26.546 "zoned": false, 00:17:26.546 "supported_io_types": { 00:17:26.546 "read": true, 00:17:26.546 "write": true, 00:17:26.546 "unmap": true, 00:17:26.546 "flush": true, 00:17:26.546 "reset": true, 00:17:26.546 "nvme_admin": false, 00:17:26.546 "nvme_io": false, 00:17:26.546 "nvme_io_md": false, 00:17:26.546 "write_zeroes": true, 00:17:26.546 "zcopy": true, 00:17:26.546 "get_zone_info": false, 00:17:26.546 "zone_management": false, 00:17:26.546 "zone_append": false, 00:17:26.546 "compare": false, 00:17:26.546 "compare_and_write": false, 00:17:26.546 "abort": true, 00:17:26.546 "seek_hole": false, 00:17:26.546 "seek_data": false, 00:17:26.546 "copy": true, 00:17:26.546 "nvme_iov_md": false 00:17:26.546 }, 00:17:26.546 "memory_domains": [ 00:17:26.546 { 00:17:26.546 "dma_device_id": "system", 00:17:26.546 "dma_device_type": 1 00:17:26.546 }, 00:17:26.546 { 00:17:26.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.546 
"dma_device_type": 2 00:17:26.546 } 00:17:26.546 ], 00:17:26.546 "driver_specific": { 00:17:26.546 "passthru": { 00:17:26.546 "name": "pt1", 00:17:26.546 "base_bdev_name": "malloc1" 00:17:26.546 } 00:17:26.546 } 00:17:26.546 }' 00:17:26.546 13:17:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:26.546 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:26.805 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:26.805 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:26.805 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:26.805 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:26.805 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:26.805 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:26.805 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:26.805 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:26.805 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:27.065 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:27.065 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:27.065 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:27.065 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:27.065 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:27.065 "name": "pt2", 00:17:27.065 "aliases": [ 00:17:27.065 
"00000000-0000-0000-0000-000000000002" 00:17:27.065 ], 00:17:27.065 "product_name": "passthru", 00:17:27.065 "block_size": 512, 00:17:27.065 "num_blocks": 65536, 00:17:27.065 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:27.065 "assigned_rate_limits": { 00:17:27.065 "rw_ios_per_sec": 0, 00:17:27.065 "rw_mbytes_per_sec": 0, 00:17:27.065 "r_mbytes_per_sec": 0, 00:17:27.065 "w_mbytes_per_sec": 0 00:17:27.065 }, 00:17:27.065 "claimed": true, 00:17:27.065 "claim_type": "exclusive_write", 00:17:27.065 "zoned": false, 00:17:27.065 "supported_io_types": { 00:17:27.065 "read": true, 00:17:27.065 "write": true, 00:17:27.065 "unmap": true, 00:17:27.065 "flush": true, 00:17:27.065 "reset": true, 00:17:27.065 "nvme_admin": false, 00:17:27.065 "nvme_io": false, 00:17:27.065 "nvme_io_md": false, 00:17:27.065 "write_zeroes": true, 00:17:27.065 "zcopy": true, 00:17:27.065 "get_zone_info": false, 00:17:27.065 "zone_management": false, 00:17:27.065 "zone_append": false, 00:17:27.065 "compare": false, 00:17:27.065 "compare_and_write": false, 00:17:27.065 "abort": true, 00:17:27.065 "seek_hole": false, 00:17:27.065 "seek_data": false, 00:17:27.065 "copy": true, 00:17:27.065 "nvme_iov_md": false 00:17:27.065 }, 00:17:27.065 "memory_domains": [ 00:17:27.065 { 00:17:27.065 "dma_device_id": "system", 00:17:27.065 "dma_device_type": 1 00:17:27.065 }, 00:17:27.065 { 00:17:27.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.065 "dma_device_type": 2 00:17:27.065 } 00:17:27.065 ], 00:17:27.065 "driver_specific": { 00:17:27.065 "passthru": { 00:17:27.065 "name": "pt2", 00:17:27.065 "base_bdev_name": "malloc2" 00:17:27.065 } 00:17:27.065 } 00:17:27.065 }' 00:17:27.065 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:27.324 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:27.324 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:27.324 13:17:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.324 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.324 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:27.324 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.324 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.324 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:27.324 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:27.583 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:27.583 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:27.583 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:27.583 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:27.583 13:17:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:27.842 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:27.842 "name": "pt3", 00:17:27.842 "aliases": [ 00:17:27.842 "00000000-0000-0000-0000-000000000003" 00:17:27.842 ], 00:17:27.842 "product_name": "passthru", 00:17:27.842 "block_size": 512, 00:17:27.842 "num_blocks": 65536, 00:17:27.842 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:27.842 "assigned_rate_limits": { 00:17:27.842 "rw_ios_per_sec": 0, 00:17:27.842 "rw_mbytes_per_sec": 0, 00:17:27.842 "r_mbytes_per_sec": 0, 00:17:27.842 "w_mbytes_per_sec": 0 00:17:27.842 }, 00:17:27.842 "claimed": true, 00:17:27.842 "claim_type": "exclusive_write", 00:17:27.842 "zoned": false, 00:17:27.842 "supported_io_types": { 
00:17:27.842 "read": true, 00:17:27.842 "write": true, 00:17:27.842 "unmap": true, 00:17:27.842 "flush": true, 00:17:27.842 "reset": true, 00:17:27.842 "nvme_admin": false, 00:17:27.842 "nvme_io": false, 00:17:27.842 "nvme_io_md": false, 00:17:27.842 "write_zeroes": true, 00:17:27.842 "zcopy": true, 00:17:27.842 "get_zone_info": false, 00:17:27.842 "zone_management": false, 00:17:27.842 "zone_append": false, 00:17:27.842 "compare": false, 00:17:27.842 "compare_and_write": false, 00:17:27.842 "abort": true, 00:17:27.842 "seek_hole": false, 00:17:27.842 "seek_data": false, 00:17:27.842 "copy": true, 00:17:27.842 "nvme_iov_md": false 00:17:27.842 }, 00:17:27.842 "memory_domains": [ 00:17:27.842 { 00:17:27.842 "dma_device_id": "system", 00:17:27.842 "dma_device_type": 1 00:17:27.842 }, 00:17:27.842 { 00:17:27.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.842 "dma_device_type": 2 00:17:27.842 } 00:17:27.842 ], 00:17:27.842 "driver_specific": { 00:17:27.842 "passthru": { 00:17:27.842 "name": "pt3", 00:17:27.842 "base_bdev_name": "malloc3" 00:17:27.842 } 00:17:27.842 } 00:17:27.842 }' 00:17:27.842 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:27.842 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:27.842 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:27.842 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.842 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.842 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:27.842 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.842 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:28.100 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:17:28.100 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:28.100 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:28.100 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:28.100 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:28.100 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:17:28.359 [2024-07-26 13:17:08.694936] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:28.359 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 61806a0b-37d8-4cbf-9364-eb5774625bbe '!=' 61806a0b-37d8-4cbf-9364-eb5774625bbe ']' 00:17:28.359 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:17:28.359 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:28.359 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:28.359 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:28.618 [2024-07-26 13:17:08.927314] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:28.618 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:28.618 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:28.618 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:28.618 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:28.618 13:17:08 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:28.618 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:28.618 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:28.618 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:28.618 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:28.618 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:28.618 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.618 13:17:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.877 13:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:28.877 "name": "raid_bdev1", 00:17:28.877 "uuid": "61806a0b-37d8-4cbf-9364-eb5774625bbe", 00:17:28.877 "strip_size_kb": 0, 00:17:28.877 "state": "online", 00:17:28.877 "raid_level": "raid1", 00:17:28.877 "superblock": true, 00:17:28.877 "num_base_bdevs": 3, 00:17:28.877 "num_base_bdevs_discovered": 2, 00:17:28.877 "num_base_bdevs_operational": 2, 00:17:28.877 "base_bdevs_list": [ 00:17:28.877 { 00:17:28.877 "name": null, 00:17:28.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.877 "is_configured": false, 00:17:28.877 "data_offset": 2048, 00:17:28.877 "data_size": 63488 00:17:28.877 }, 00:17:28.877 { 00:17:28.877 "name": "pt2", 00:17:28.877 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:28.877 "is_configured": true, 00:17:28.877 "data_offset": 2048, 00:17:28.877 "data_size": 63488 00:17:28.877 }, 00:17:28.877 { 00:17:28.877 "name": "pt3", 00:17:28.877 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:28.877 "is_configured": true, 00:17:28.877 
"data_offset": 2048, 00:17:28.877 "data_size": 63488 00:17:28.877 } 00:17:28.877 ] 00:17:28.877 }' 00:17:28.877 13:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:28.877 13:17:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.445 13:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:29.445 [2024-07-26 13:17:09.962028] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:29.445 [2024-07-26 13:17:09.962055] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:29.445 [2024-07-26 13:17:09.962109] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:29.445 [2024-07-26 13:17:09.962172] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:29.445 [2024-07-26 13:17:09.962184] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1370160 name raid_bdev1, state offline 00:17:29.704 13:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.704 13:17:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:17:29.704 13:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:17:29.704 13:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:17:29.704 13:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:17:29.704 13:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:17:29.704 13:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:29.962 13:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:17:29.962 13:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:17:29.962 13:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:30.221 13:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:17:30.221 13:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:17:30.221 13:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:17:30.221 13:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:17:30.221 13:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:30.481 [2024-07-26 13:17:10.876394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:30.481 [2024-07-26 13:17:10.876439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.481 [2024-07-26 13:17:10.876456] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x151ad50 00:17:30.481 [2024-07-26 13:17:10.876467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.481 [2024-07-26 13:17:10.877981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.481 [2024-07-26 13:17:10.878010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:30.481 [2024-07-26 13:17:10.878075] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:30.481 [2024-07-26 13:17:10.878108] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:30.481 pt2 00:17:30.481 13:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:30.481 13:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:30.481 13:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:30.481 13:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:30.481 13:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:30.481 13:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:30.481 13:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:30.481 13:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:30.481 13:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:30.481 13:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:30.481 13:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.481 13:17:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.740 13:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:30.740 "name": "raid_bdev1", 00:17:30.740 "uuid": "61806a0b-37d8-4cbf-9364-eb5774625bbe", 00:17:30.740 "strip_size_kb": 0, 00:17:30.740 "state": "configuring", 00:17:30.740 "raid_level": "raid1", 00:17:30.740 "superblock": true, 00:17:30.740 "num_base_bdevs": 3, 00:17:30.740 "num_base_bdevs_discovered": 1, 00:17:30.740 "num_base_bdevs_operational": 2, 
00:17:30.740 "base_bdevs_list": [ 00:17:30.740 { 00:17:30.740 "name": null, 00:17:30.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.740 "is_configured": false, 00:17:30.740 "data_offset": 2048, 00:17:30.740 "data_size": 63488 00:17:30.740 }, 00:17:30.740 { 00:17:30.740 "name": "pt2", 00:17:30.740 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:30.740 "is_configured": true, 00:17:30.740 "data_offset": 2048, 00:17:30.740 "data_size": 63488 00:17:30.740 }, 00:17:30.740 { 00:17:30.740 "name": null, 00:17:30.740 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:30.740 "is_configured": false, 00:17:30.740 "data_offset": 2048, 00:17:30.740 "data_size": 63488 00:17:30.740 } 00:17:30.740 ] 00:17:30.740 }' 00:17:30.740 13:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:30.740 13:17:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.308 13:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:17:31.308 13:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:17:31.308 13:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:17:31.308 13:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:31.566 [2024-07-26 13:17:11.899091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:31.567 [2024-07-26 13:17:11.899147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.567 [2024-07-26 13:17:11.899164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1510910 00:17:31.567 [2024-07-26 13:17:11.899175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.567 [2024-07-26 13:17:11.899505] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.567 [2024-07-26 13:17:11.899524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:31.567 [2024-07-26 13:17:11.899586] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:31.567 [2024-07-26 13:17:11.899605] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:31.567 [2024-07-26 13:17:11.899699] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1370c20 00:17:31.567 [2024-07-26 13:17:11.899708] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:31.567 [2024-07-26 13:17:11.899873] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1510170 00:17:31.567 [2024-07-26 13:17:11.899997] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1370c20 00:17:31.567 [2024-07-26 13:17:11.900006] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1370c20 00:17:31.567 [2024-07-26 13:17:11.900097] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.567 pt3 00:17:31.567 13:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:31.567 13:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:31.567 13:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:31.567 13:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:31.567 13:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:31.567 13:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:31.567 13:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:17:31.567 13:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:31.567 13:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:31.567 13:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:31.567 13:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.567 13:17:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.825 13:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:31.825 "name": "raid_bdev1", 00:17:31.825 "uuid": "61806a0b-37d8-4cbf-9364-eb5774625bbe", 00:17:31.825 "strip_size_kb": 0, 00:17:31.825 "state": "online", 00:17:31.825 "raid_level": "raid1", 00:17:31.825 "superblock": true, 00:17:31.825 "num_base_bdevs": 3, 00:17:31.825 "num_base_bdevs_discovered": 2, 00:17:31.826 "num_base_bdevs_operational": 2, 00:17:31.826 "base_bdevs_list": [ 00:17:31.826 { 00:17:31.826 "name": null, 00:17:31.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.826 "is_configured": false, 00:17:31.826 "data_offset": 2048, 00:17:31.826 "data_size": 63488 00:17:31.826 }, 00:17:31.826 { 00:17:31.826 "name": "pt2", 00:17:31.826 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:31.826 "is_configured": true, 00:17:31.826 "data_offset": 2048, 00:17:31.826 "data_size": 63488 00:17:31.826 }, 00:17:31.826 { 00:17:31.826 "name": "pt3", 00:17:31.826 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:31.826 "is_configured": true, 00:17:31.826 "data_offset": 2048, 00:17:31.826 "data_size": 63488 00:17:31.826 } 00:17:31.826 ] 00:17:31.826 }' 00:17:31.826 13:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:31.826 13:17:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:32.448 13:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:32.448 [2024-07-26 13:17:12.933801] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:32.448 [2024-07-26 13:17:12.933826] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:32.448 [2024-07-26 13:17:12.933877] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.448 [2024-07-26 13:17:12.933933] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:32.449 [2024-07-26 13:17:12.933944] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1370c20 name raid_bdev1, state offline 00:17:32.449 13:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:17:32.449 13:17:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.707 13:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:17:32.707 13:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:17:32.707 13:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 3 -gt 2 ']' 00:17:32.708 13:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # i=2 00:17:32.708 13:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@550 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:32.966 13:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:33.231 [2024-07-26 13:17:13.619565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:33.231 [2024-07-26 13:17:13.619606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.232 [2024-07-26 13:17:13.619625] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1377520 00:17:33.232 [2024-07-26 13:17:13.619636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.232 [2024-07-26 13:17:13.621128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.232 [2024-07-26 13:17:13.621164] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:33.232 [2024-07-26 13:17:13.621224] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:33.232 [2024-07-26 13:17:13.621249] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:33.232 [2024-07-26 13:17:13.621340] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:33.232 [2024-07-26 13:17:13.621352] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:33.232 [2024-07-26 13:17:13.621365] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x150fc60 name raid_bdev1, state configuring 00:17:33.232 [2024-07-26 13:17:13.621386] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:33.232 pt1 00:17:33.232 13:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 3 -gt 2 ']' 00:17:33.232 13:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:33.232 13:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:33.232 13:17:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:33.232 13:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:33.232 13:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:33.232 13:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:33.232 13:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:33.232 13:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:33.232 13:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:33.232 13:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:33.232 13:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.232 13:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.503 13:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:33.503 "name": "raid_bdev1", 00:17:33.503 "uuid": "61806a0b-37d8-4cbf-9364-eb5774625bbe", 00:17:33.503 "strip_size_kb": 0, 00:17:33.503 "state": "configuring", 00:17:33.503 "raid_level": "raid1", 00:17:33.503 "superblock": true, 00:17:33.503 "num_base_bdevs": 3, 00:17:33.503 "num_base_bdevs_discovered": 1, 00:17:33.503 "num_base_bdevs_operational": 2, 00:17:33.503 "base_bdevs_list": [ 00:17:33.503 { 00:17:33.503 "name": null, 00:17:33.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.503 "is_configured": false, 00:17:33.503 "data_offset": 2048, 00:17:33.503 "data_size": 63488 00:17:33.503 }, 00:17:33.503 { 00:17:33.503 "name": "pt2", 00:17:33.503 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:33.503 "is_configured": true, 00:17:33.503 
"data_offset": 2048, 00:17:33.503 "data_size": 63488 00:17:33.503 }, 00:17:33.503 { 00:17:33.503 "name": null, 00:17:33.503 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:33.503 "is_configured": false, 00:17:33.503 "data_offset": 2048, 00:17:33.503 "data_size": 63488 00:17:33.503 } 00:17:33.503 ] 00:17:33.503 }' 00:17:33.503 13:17:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:33.503 13:17:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.071 13:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:34.071 13:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:17:34.330 13:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:17:34.330 13:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:34.589 [2024-07-26 13:17:14.874889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:34.589 [2024-07-26 13:17:14.874938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.589 [2024-07-26 13:17:14.874955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1370c20 00:17:34.589 [2024-07-26 13:17:14.874967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.589 [2024-07-26 13:17:14.875288] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.589 [2024-07-26 13:17:14.875306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:34.589 [2024-07-26 13:17:14.875364] 
bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:34.589 [2024-07-26 13:17:14.875384] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:34.589 [2024-07-26 13:17:14.875478] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1370560 00:17:34.589 [2024-07-26 13:17:14.875489] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:34.589 [2024-07-26 13:17:14.875645] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1372520 00:17:34.589 [2024-07-26 13:17:14.875761] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1370560 00:17:34.589 [2024-07-26 13:17:14.875770] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1370560 00:17:34.589 [2024-07-26 13:17:14.875858] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.589 pt3 00:17:34.589 13:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:34.589 13:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:34.589 13:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:34.589 13:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:34.589 13:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:34.589 13:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:34.589 13:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:34.589 13:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:34.589 13:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:34.589 
13:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:34.589 13:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.589 13:17:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.848 13:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:34.848 "name": "raid_bdev1", 00:17:34.848 "uuid": "61806a0b-37d8-4cbf-9364-eb5774625bbe", 00:17:34.848 "strip_size_kb": 0, 00:17:34.848 "state": "online", 00:17:34.848 "raid_level": "raid1", 00:17:34.848 "superblock": true, 00:17:34.848 "num_base_bdevs": 3, 00:17:34.848 "num_base_bdevs_discovered": 2, 00:17:34.848 "num_base_bdevs_operational": 2, 00:17:34.848 "base_bdevs_list": [ 00:17:34.848 { 00:17:34.848 "name": null, 00:17:34.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.848 "is_configured": false, 00:17:34.848 "data_offset": 2048, 00:17:34.848 "data_size": 63488 00:17:34.848 }, 00:17:34.848 { 00:17:34.848 "name": "pt2", 00:17:34.848 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:34.848 "is_configured": true, 00:17:34.848 "data_offset": 2048, 00:17:34.848 "data_size": 63488 00:17:34.848 }, 00:17:34.848 { 00:17:34.848 "name": "pt3", 00:17:34.848 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:34.848 "is_configured": true, 00:17:34.848 "data_offset": 2048, 00:17:34.848 "data_size": 63488 00:17:34.848 } 00:17:34.848 ] 00:17:34.848 }' 00:17:34.848 13:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:34.848 13:17:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.416 13:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:17:35.416 
13:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:35.416 13:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:17:35.416 13:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:35.416 13:17:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:17:35.675 [2024-07-26 13:17:16.142455] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:35.675 13:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 61806a0b-37d8-4cbf-9364-eb5774625bbe '!=' 61806a0b-37d8-4cbf-9364-eb5774625bbe ']' 00:17:35.675 13:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 720526 00:17:35.675 13:17:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 720526 ']' 00:17:35.675 13:17:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 720526 00:17:35.675 13:17:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:17:35.675 13:17:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:35.675 13:17:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 720526 00:17:35.934 13:17:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:35.934 13:17:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:35.934 13:17:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 720526' 00:17:35.934 killing process with pid 720526 00:17:35.934 13:17:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 720526 00:17:35.934 
[2024-07-26 13:17:16.221673] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:35.934 [2024-07-26 13:17:16.221724] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:35.934 [2024-07-26 13:17:16.221771] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:35.934 [2024-07-26 13:17:16.221782] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1370560 name raid_bdev1, state offline 00:17:35.934 13:17:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 720526 00:17:35.934 [2024-07-26 13:17:16.246065] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:35.934 13:17:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:17:35.934 00:17:35.934 real 0m22.088s 00:17:35.934 user 0m40.416s 00:17:35.934 sys 0m3.912s 00:17:35.934 13:17:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:35.934 13:17:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.934 ************************************ 00:17:35.934 END TEST raid_superblock_test 00:17:35.934 ************************************ 00:17:36.194 13:17:16 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:17:36.194 13:17:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:36.194 13:17:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:36.194 13:17:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:36.194 ************************************ 00:17:36.194 START TEST raid_read_error_test 00:17:36.194 ************************************ 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 
00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:17:36.194 13:17:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.zMCqhKLhNw 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=724721 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 724721 /var/tmp/spdk-raid.sock 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 724721 ']' 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:36.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:36.194 13:17:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.194 [2024-07-26 13:17:16.595615] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:17:36.194 [2024-07-26 13:17:16.595678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid724721 ] 00:17:36.194 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.194 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:36.194 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.194 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:36.194 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.194 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:36.194 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.194 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:36.194 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.194 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:36.194 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.194 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:36.194 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.194 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:36.194 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.194 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:36.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.195 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:36.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.195 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:36.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.195 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:36.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.195 EAL: Requested device 0000:3d:02.3 cannot be used 
00:17:36.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.195 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:36.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.195 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:36.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.195 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:36.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.195 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:36.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.195 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:36.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.195 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:36.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.195 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:36.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.195 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:36.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.195 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:36.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.195 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:36.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.195 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:36.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.195 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:36.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.195 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:36.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.195 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:36.195 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.195 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:36.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.195 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:36.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.195 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:36.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.195 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:36.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.195 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:36.195 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:36.195 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:36.454 [2024-07-26 13:17:16.727298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.454 [2024-07-26 13:17:16.810557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.454 [2024-07-26 13:17:16.873437] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:36.454 [2024-07-26 13:17:16.873477] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:37.021 13:17:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:37.021 13:17:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:17:37.021 13:17:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:37.021 13:17:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:37.280 BaseBdev1_malloc 00:17:37.280 13:17:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:37.539 true 00:17:37.539 13:17:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:37.798 [2024-07-26 13:17:18.162995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:37.798 [2024-07-26 13:17:18.163037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.798 [2024-07-26 13:17:18.163056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bde190 00:17:37.798 [2024-07-26 13:17:18.163067] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.798 [2024-07-26 13:17:18.164649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.798 [2024-07-26 13:17:18.164678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:37.798 BaseBdev1 00:17:37.798 13:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:37.798 13:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:38.058 BaseBdev2_malloc 00:17:38.058 13:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:38.317 true 00:17:38.317 13:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:38.576 [2024-07-26 13:17:18.849182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:17:38.576 [2024-07-26 13:17:18.849223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.576 [2024-07-26 13:17:18.849242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1be2e20 00:17:38.576 [2024-07-26 13:17:18.849254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.576 [2024-07-26 13:17:18.850580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.576 [2024-07-26 13:17:18.850606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:38.576 BaseBdev2 00:17:38.576 13:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:38.576 13:17:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:38.576 BaseBdev3_malloc 00:17:38.835 13:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:38.835 true 00:17:38.835 13:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:39.095 [2024-07-26 13:17:19.567444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:39.095 [2024-07-26 13:17:19.567481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.095 [2024-07-26 13:17:19.567499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1be3d90 00:17:39.095 [2024-07-26 13:17:19.567510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.095 [2024-07-26 
13:17:19.568779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.095 [2024-07-26 13:17:19.568808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:39.095 BaseBdev3 00:17:39.095 13:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:17:39.354 [2024-07-26 13:17:19.804099] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:39.354 [2024-07-26 13:17:19.805295] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:39.354 [2024-07-26 13:17:19.805359] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:39.354 [2024-07-26 13:17:19.805533] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1be5ba0 00:17:39.354 [2024-07-26 13:17:19.805544] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:39.354 [2024-07-26 13:17:19.805724] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1be5820 00:17:39.354 [2024-07-26 13:17:19.805866] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1be5ba0 00:17:39.354 [2024-07-26 13:17:19.805875] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1be5ba0 00:17:39.354 [2024-07-26 13:17:19.805985] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.354 13:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:39.354 13:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:39.354 13:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:17:39.354 13:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:39.354 13:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:39.354 13:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:39.354 13:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:39.354 13:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:39.354 13:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:39.354 13:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:39.354 13:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.354 13:17:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.614 13:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:39.614 "name": "raid_bdev1", 00:17:39.614 "uuid": "dff975ee-7fde-4d36-931a-8e89761e3d98", 00:17:39.614 "strip_size_kb": 0, 00:17:39.614 "state": "online", 00:17:39.614 "raid_level": "raid1", 00:17:39.614 "superblock": true, 00:17:39.614 "num_base_bdevs": 3, 00:17:39.614 "num_base_bdevs_discovered": 3, 00:17:39.614 "num_base_bdevs_operational": 3, 00:17:39.614 "base_bdevs_list": [ 00:17:39.614 { 00:17:39.614 "name": "BaseBdev1", 00:17:39.614 "uuid": "0b9116c4-6ccc-564e-bf24-b0e31e69cc02", 00:17:39.614 "is_configured": true, 00:17:39.614 "data_offset": 2048, 00:17:39.614 "data_size": 63488 00:17:39.614 }, 00:17:39.614 { 00:17:39.614 "name": "BaseBdev2", 00:17:39.614 "uuid": "43417b8e-c405-53b0-ac41-45b4cf9253b0", 00:17:39.614 "is_configured": true, 00:17:39.614 "data_offset": 2048, 00:17:39.614 "data_size": 63488 
00:17:39.614 }, 00:17:39.614 { 00:17:39.614 "name": "BaseBdev3", 00:17:39.614 "uuid": "e111b7ae-3e77-5bc3-8137-99a0dfbeb5c9", 00:17:39.614 "is_configured": true, 00:17:39.614 "data_offset": 2048, 00:17:39.614 "data_size": 63488 00:17:39.614 } 00:17:39.614 ] 00:17:39.614 }' 00:17:39.614 13:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:39.614 13:17:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.182 13:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:17:40.182 13:17:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:40.441 [2024-07-26 13:17:20.718741] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1be6ff0 00:17:41.379 13:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:41.379 13:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:17:41.379 13:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:41.379 13:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:17:41.379 13:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:17:41.379 13:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:41.380 13:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:41.380 13:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:41.380 13:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid1 00:17:41.380 13:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:41.380 13:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:41.380 13:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:41.380 13:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:41.380 13:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:41.380 13:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:41.380 13:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.380 13:17:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.639 13:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:41.639 "name": "raid_bdev1", 00:17:41.639 "uuid": "dff975ee-7fde-4d36-931a-8e89761e3d98", 00:17:41.639 "strip_size_kb": 0, 00:17:41.639 "state": "online", 00:17:41.639 "raid_level": "raid1", 00:17:41.639 "superblock": true, 00:17:41.639 "num_base_bdevs": 3, 00:17:41.639 "num_base_bdevs_discovered": 3, 00:17:41.639 "num_base_bdevs_operational": 3, 00:17:41.639 "base_bdevs_list": [ 00:17:41.639 { 00:17:41.639 "name": "BaseBdev1", 00:17:41.639 "uuid": "0b9116c4-6ccc-564e-bf24-b0e31e69cc02", 00:17:41.639 "is_configured": true, 00:17:41.639 "data_offset": 2048, 00:17:41.639 "data_size": 63488 00:17:41.639 }, 00:17:41.639 { 00:17:41.639 "name": "BaseBdev2", 00:17:41.639 "uuid": "43417b8e-c405-53b0-ac41-45b4cf9253b0", 00:17:41.639 "is_configured": true, 00:17:41.639 "data_offset": 2048, 00:17:41.639 "data_size": 63488 00:17:41.639 }, 00:17:41.639 { 00:17:41.639 "name": "BaseBdev3", 00:17:41.639 "uuid": 
"e111b7ae-3e77-5bc3-8137-99a0dfbeb5c9", 00:17:41.639 "is_configured": true, 00:17:41.639 "data_offset": 2048, 00:17:41.639 "data_size": 63488 00:17:41.639 } 00:17:41.639 ] 00:17:41.639 }' 00:17:41.639 13:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:41.639 13:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.208 13:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:42.467 [2024-07-26 13:17:22.873477] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:42.467 [2024-07-26 13:17:22.873510] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:42.467 [2024-07-26 13:17:22.876434] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:42.467 [2024-07-26 13:17:22.876465] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.467 [2024-07-26 13:17:22.876553] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:42.467 [2024-07-26 13:17:22.876564] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1be5ba0 name raid_bdev1, state offline 00:17:42.467 0 00:17:42.467 13:17:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 724721 00:17:42.467 13:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 724721 ']' 00:17:42.467 13:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 724721 00:17:42.467 13:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:17:42.467 13:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:42.467 13:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 724721 00:17:42.467 13:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:42.467 13:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:42.467 13:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 724721' 00:17:42.467 killing process with pid 724721 00:17:42.467 13:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 724721 00:17:42.467 [2024-07-26 13:17:22.949967] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:42.467 13:17:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 724721 00:17:42.467 [2024-07-26 13:17:22.968880] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:42.727 13:17:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.zMCqhKLhNw 00:17:42.727 13:17:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:17:42.727 13:17:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:17:42.727 13:17:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:17:42.727 13:17:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:17:42.727 13:17:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:42.727 13:17:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:42.727 13:17:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:42.727 00:17:42.727 real 0m6.652s 00:17:42.727 user 0m10.460s 00:17:42.727 sys 0m1.201s 00:17:42.727 13:17:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:42.727 13:17:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.727 
************************************ 00:17:42.727 END TEST raid_read_error_test 00:17:42.727 ************************************ 00:17:42.727 13:17:23 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:17:42.727 13:17:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:42.727 13:17:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:42.727 13:17:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:42.986 ************************************ 00:17:42.986 START TEST raid_write_error_test 00:17:42.986 ************************************ 00:17:42.986 13:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:17:42.986 13:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:17:42.986 13:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:17:42.986 13:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:17:42.986 13:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:17:42.986 13:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:42.986 13:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:17:42.986 13:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:42.986 13:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:42.986 13:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:17:42.986 13:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:42.986 13:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:42.986 13:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # 
echo BaseBdev3 00:17:42.986 13:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:17:42.986 13:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:17:42.986 13:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:42.986 13:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:17:42.986 13:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:17:42.986 13:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:17:42.986 13:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:17:42.986 13:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:17:42.986 13:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:17:42.986 13:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:17:42.987 13:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:17:42.987 13:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:17:42.987 13:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.02ji99m7DZ 00:17:42.987 13:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=726087 00:17:42.987 13:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 726087 /var/tmp/spdk-raid.sock 00:17:42.987 13:17:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:42.987 13:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 726087 ']' 
00:17:42.987 13:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:42.987 13:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:42.987 13:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:42.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:42.987 13:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:42.987 13:17:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.987 [2024-07-26 13:17:23.331714] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:17:42.987 [2024-07-26 13:17:23.331771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid726087 ] 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3d:01.5 
cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3d:02.3 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3f:01.3 cannot be used 
00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:42.987 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:42.987 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:42.987 [2024-07-26 13:17:23.464501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.246 [2024-07-26 13:17:23.546984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.246 [2024-07-26 13:17:23.607416] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: 
raid_bdev_get_ctx_size 00:17:43.246 [2024-07-26 13:17:23.607461] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:43.813 13:17:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:43.813 13:17:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:17:43.813 13:17:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:43.813 13:17:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:44.072 BaseBdev1_malloc 00:17:44.072 13:17:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:44.331 true 00:17:44.331 13:17:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:44.616 [2024-07-26 13:17:25.142118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:44.616 [2024-07-26 13:17:25.142167] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.616 [2024-07-26 13:17:25.142185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfcf190 00:17:44.616 [2024-07-26 13:17:25.142197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.875 [2024-07-26 13:17:25.143819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.875 [2024-07-26 13:17:25.143849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:44.875 BaseBdev1 00:17:44.875 13:17:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:44.875 13:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:44.875 BaseBdev2_malloc 00:17:44.875 13:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:45.443 true 00:17:45.443 13:17:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:45.701 [2024-07-26 13:17:26.101289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:45.701 [2024-07-26 13:17:26.101331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.701 [2024-07-26 13:17:26.101349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfd3e20 00:17:45.701 [2024-07-26 13:17:26.101361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.701 [2024-07-26 13:17:26.102758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.701 [2024-07-26 13:17:26.102787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:45.701 BaseBdev2 00:17:45.701 13:17:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:17:45.701 13:17:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:45.960 BaseBdev3_malloc 00:17:45.960 13:17:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:46.219 true 00:17:46.219 13:17:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:46.518 [2024-07-26 13:17:26.787376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:46.518 [2024-07-26 13:17:26.787420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.518 [2024-07-26 13:17:26.787442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfd4d90 00:17:46.518 [2024-07-26 13:17:26.787453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.518 [2024-07-26 13:17:26.788859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.518 [2024-07-26 13:17:26.788887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:46.518 BaseBdev3 00:17:46.518 13:17:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:17:46.778 [2024-07-26 13:17:27.016012] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:46.778 [2024-07-26 13:17:27.017215] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:46.778 [2024-07-26 13:17:27.017284] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:46.778 [2024-07-26 13:17:27.017464] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xfd6ba0 00:17:46.778 [2024-07-26 13:17:27.017475] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, 
blocklen 512 00:17:46.778 [2024-07-26 13:17:27.017667] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfd6820 00:17:46.778 [2024-07-26 13:17:27.017813] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xfd6ba0 00:17:46.778 [2024-07-26 13:17:27.017823] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xfd6ba0 00:17:46.778 [2024-07-26 13:17:27.017937] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.778 13:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:46.778 13:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:46.778 13:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:46.778 13:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:46.778 13:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:46.778 13:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:46.778 13:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:46.778 13:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:46.778 13:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:46.778 13:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:46.778 13:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.778 13:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.778 13:17:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:46.778 "name": "raid_bdev1", 00:17:46.778 "uuid": "83020bb2-97a1-4112-8085-21d88aa313aa", 00:17:46.778 "strip_size_kb": 0, 00:17:46.778 "state": "online", 00:17:46.778 "raid_level": "raid1", 00:17:46.778 "superblock": true, 00:17:46.778 "num_base_bdevs": 3, 00:17:46.778 "num_base_bdevs_discovered": 3, 00:17:46.778 "num_base_bdevs_operational": 3, 00:17:46.779 "base_bdevs_list": [ 00:17:46.779 { 00:17:46.779 "name": "BaseBdev1", 00:17:46.779 "uuid": "3dccc96e-097a-522b-915c-f34fa1baacc6", 00:17:46.779 "is_configured": true, 00:17:46.779 "data_offset": 2048, 00:17:46.779 "data_size": 63488 00:17:46.779 }, 00:17:46.779 { 00:17:46.779 "name": "BaseBdev2", 00:17:46.779 "uuid": "1ab35a30-c81d-5ff4-8573-eb3d67750c76", 00:17:46.779 "is_configured": true, 00:17:46.779 "data_offset": 2048, 00:17:46.779 "data_size": 63488 00:17:46.779 }, 00:17:46.779 { 00:17:46.779 "name": "BaseBdev3", 00:17:46.779 "uuid": "00f73a52-0339-5a52-b261-b1c79eb7f15f", 00:17:46.779 "is_configured": true, 00:17:46.779 "data_offset": 2048, 00:17:46.779 "data_size": 63488 00:17:46.779 } 00:17:46.779 ] 00:17:46.779 }' 00:17:46.779 13:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:46.779 13:17:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.346 13:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:17:47.346 13:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:47.606 [2024-07-26 13:17:27.938677] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfd7ff0 00:17:48.543 13:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error 
EE_BaseBdev1_malloc write failure 00:17:48.543 [2024-07-26 13:17:29.053502] bdev_raid.c:2263:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:17:48.543 [2024-07-26 13:17:29.053560] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:48.543 [2024-07-26 13:17:29.053754] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0xfd7ff0 00:17:48.801 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:17:48.801 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:48.801 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:17:48.801 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=2 00:17:48.801 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:48.801 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:48.801 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:48.801 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:48.801 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:48.801 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:48.801 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:48.801 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:48.802 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:48.802 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:17:48.802 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.802 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.802 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:48.802 "name": "raid_bdev1", 00:17:48.802 "uuid": "83020bb2-97a1-4112-8085-21d88aa313aa", 00:17:48.802 "strip_size_kb": 0, 00:17:48.802 "state": "online", 00:17:48.802 "raid_level": "raid1", 00:17:48.802 "superblock": true, 00:17:48.802 "num_base_bdevs": 3, 00:17:48.802 "num_base_bdevs_discovered": 2, 00:17:48.802 "num_base_bdevs_operational": 2, 00:17:48.802 "base_bdevs_list": [ 00:17:48.802 { 00:17:48.802 "name": null, 00:17:48.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.802 "is_configured": false, 00:17:48.802 "data_offset": 2048, 00:17:48.802 "data_size": 63488 00:17:48.802 }, 00:17:48.802 { 00:17:48.802 "name": "BaseBdev2", 00:17:48.802 "uuid": "1ab35a30-c81d-5ff4-8573-eb3d67750c76", 00:17:48.802 "is_configured": true, 00:17:48.802 "data_offset": 2048, 00:17:48.802 "data_size": 63488 00:17:48.802 }, 00:17:48.802 { 00:17:48.802 "name": "BaseBdev3", 00:17:48.802 "uuid": "00f73a52-0339-5a52-b261-b1c79eb7f15f", 00:17:48.802 "is_configured": true, 00:17:48.802 "data_offset": 2048, 00:17:48.802 "data_size": 63488 00:17:48.802 } 00:17:48.802 ] 00:17:48.802 }' 00:17:48.802 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:48.802 13:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.370 13:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:49.630 [2024-07-26 13:17:30.082770] 
bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:49.630 [2024-07-26 13:17:30.082805] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:49.630 [2024-07-26 13:17:30.085687] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:49.630 [2024-07-26 13:17:30.085717] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.630 [2024-07-26 13:17:30.085783] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:49.630 [2024-07-26 13:17:30.085794] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xfd6ba0 name raid_bdev1, state offline 00:17:49.630 0 00:17:49.630 13:17:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 726087 00:17:49.630 13:17:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 726087 ']' 00:17:49.630 13:17:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 726087 00:17:49.630 13:17:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:17:49.630 13:17:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:49.630 13:17:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 726087 00:17:49.890 13:17:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:49.890 13:17:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:49.890 13:17:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 726087' 00:17:49.890 killing process with pid 726087 00:17:49.890 13:17:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 726087 00:17:49.890 [2024-07-26 13:17:30.160965] 
bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:49.890 13:17:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 726087 00:17:49.890 [2024-07-26 13:17:30.179386] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:49.890 13:17:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.02ji99m7DZ 00:17:49.890 13:17:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:17:49.890 13:17:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:17:49.890 13:17:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:17:49.890 13:17:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:17:49.890 13:17:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:49.890 13:17:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:49.890 13:17:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:49.890 00:17:49.890 real 0m7.126s 00:17:49.890 user 0m11.369s 00:17:49.890 sys 0m1.244s 00:17:49.890 13:17:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:49.890 13:17:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.890 ************************************ 00:17:49.890 END TEST raid_write_error_test 00:17:49.890 ************************************ 00:17:50.150 13:17:30 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:17:50.150 13:17:30 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:17:50.150 13:17:30 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:17:50.150 13:17:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:50.150 13:17:30 bdev_raid -- common/autotest_common.sh@1107 -- 
# xtrace_disable 00:17:50.150 13:17:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:50.150 ************************************ 00:17:50.150 START TEST raid_state_function_test 00:17:50.150 ************************************ 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:50.150 13:17:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=727263 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 727263' 00:17:50.150 Process raid pid: 727263 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 
-L bdev_raid 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 727263 /var/tmp/spdk-raid.sock 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 727263 ']' 00:17:50.150 13:17:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:50.151 13:17:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:50.151 13:17:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:50.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:50.151 13:17:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:50.151 13:17:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.151 [2024-07-26 13:17:30.536923] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:17:50.151 [2024-07-26 13:17:30.536979] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3d:02.3 cannot be used 00:17:50.151 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:50.151 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:50.151 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:50.151 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:50.151 [2024-07-26 13:17:30.669412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.410 [2024-07-26 13:17:30.752222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.410 [2024-07-26 13:17:30.809539] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:50.410 [2024-07-26 13:17:30.809573] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:50.978 13:17:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:50.978 13:17:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:17:50.978 13:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:51.237 [2024-07-26 13:17:31.580259] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:51.237 [2024-07-26 13:17:31.580300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:17:51.237 [2024-07-26 13:17:31.580310] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:51.237 [2024-07-26 13:17:31.580321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:51.237 [2024-07-26 13:17:31.580329] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:51.237 [2024-07-26 13:17:31.580339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:51.237 [2024-07-26 13:17:31.580347] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:51.237 [2024-07-26 13:17:31.580357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:51.237 13:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:51.237 13:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:51.237 13:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:51.237 13:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:51.237 13:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:51.237 13:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:51.237 13:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:51.237 13:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:51.237 13:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:51.237 13:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:51.237 13:17:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.237 13:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.496 13:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:51.496 "name": "Existed_Raid", 00:17:51.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.496 "strip_size_kb": 64, 00:17:51.496 "state": "configuring", 00:17:51.496 "raid_level": "raid0", 00:17:51.496 "superblock": false, 00:17:51.496 "num_base_bdevs": 4, 00:17:51.496 "num_base_bdevs_discovered": 0, 00:17:51.496 "num_base_bdevs_operational": 4, 00:17:51.496 "base_bdevs_list": [ 00:17:51.496 { 00:17:51.496 "name": "BaseBdev1", 00:17:51.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.496 "is_configured": false, 00:17:51.497 "data_offset": 0, 00:17:51.497 "data_size": 0 00:17:51.497 }, 00:17:51.497 { 00:17:51.497 "name": "BaseBdev2", 00:17:51.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.497 "is_configured": false, 00:17:51.497 "data_offset": 0, 00:17:51.497 "data_size": 0 00:17:51.497 }, 00:17:51.497 { 00:17:51.497 "name": "BaseBdev3", 00:17:51.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.497 "is_configured": false, 00:17:51.497 "data_offset": 0, 00:17:51.497 "data_size": 0 00:17:51.497 }, 00:17:51.497 { 00:17:51.497 "name": "BaseBdev4", 00:17:51.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.497 "is_configured": false, 00:17:51.497 "data_offset": 0, 00:17:51.497 "data_size": 0 00:17:51.497 } 00:17:51.497 ] 00:17:51.497 }' 00:17:51.497 13:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:51.497 13:17:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.065 13:17:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:52.324 [2024-07-26 13:17:32.618866] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:52.324 [2024-07-26 13:17:32.618901] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15e8f60 name Existed_Raid, state configuring 00:17:52.324 13:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:52.324 [2024-07-26 13:17:32.847479] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:52.324 [2024-07-26 13:17:32.847510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:52.325 [2024-07-26 13:17:32.847519] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:52.325 [2024-07-26 13:17:32.847529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:52.325 [2024-07-26 13:17:32.847537] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:52.325 [2024-07-26 13:17:32.847547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:52.325 [2024-07-26 13:17:32.847555] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:52.325 [2024-07-26 13:17:32.847565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:52.584 13:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:52.584 [2024-07-26 13:17:33.085606] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:52.584 BaseBdev1 00:17:52.584 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:52.584 13:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:52.584 13:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:52.584 13:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:52.584 13:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:52.584 13:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:52.584 13:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:52.844 13:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:53.103 [ 00:17:53.103 { 00:17:53.103 "name": "BaseBdev1", 00:17:53.103 "aliases": [ 00:17:53.103 "5fb2e5c9-d706-4a40-bc89-e5d92694f79b" 00:17:53.103 ], 00:17:53.103 "product_name": "Malloc disk", 00:17:53.103 "block_size": 512, 00:17:53.103 "num_blocks": 65536, 00:17:53.103 "uuid": "5fb2e5c9-d706-4a40-bc89-e5d92694f79b", 00:17:53.103 "assigned_rate_limits": { 00:17:53.103 "rw_ios_per_sec": 0, 00:17:53.103 "rw_mbytes_per_sec": 0, 00:17:53.103 "r_mbytes_per_sec": 0, 00:17:53.103 "w_mbytes_per_sec": 0 00:17:53.103 }, 00:17:53.103 "claimed": true, 00:17:53.103 "claim_type": "exclusive_write", 00:17:53.103 "zoned": false, 00:17:53.103 "supported_io_types": { 00:17:53.103 "read": true, 00:17:53.103 "write": true, 00:17:53.103 "unmap": true, 00:17:53.103 "flush": true, 00:17:53.103 
"reset": true, 00:17:53.103 "nvme_admin": false, 00:17:53.103 "nvme_io": false, 00:17:53.103 "nvme_io_md": false, 00:17:53.103 "write_zeroes": true, 00:17:53.103 "zcopy": true, 00:17:53.103 "get_zone_info": false, 00:17:53.103 "zone_management": false, 00:17:53.103 "zone_append": false, 00:17:53.103 "compare": false, 00:17:53.103 "compare_and_write": false, 00:17:53.103 "abort": true, 00:17:53.103 "seek_hole": false, 00:17:53.103 "seek_data": false, 00:17:53.103 "copy": true, 00:17:53.103 "nvme_iov_md": false 00:17:53.103 }, 00:17:53.103 "memory_domains": [ 00:17:53.103 { 00:17:53.103 "dma_device_id": "system", 00:17:53.103 "dma_device_type": 1 00:17:53.103 }, 00:17:53.103 { 00:17:53.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.103 "dma_device_type": 2 00:17:53.103 } 00:17:53.103 ], 00:17:53.103 "driver_specific": {} 00:17:53.103 } 00:17:53.103 ] 00:17:53.103 13:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:53.103 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:53.103 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:53.103 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:53.103 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:53.103 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:53.103 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:53.103 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:53.103 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:53.103 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- 
# local num_base_bdevs_discovered 00:17:53.103 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:53.103 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.103 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.363 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:53.363 "name": "Existed_Raid", 00:17:53.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.363 "strip_size_kb": 64, 00:17:53.363 "state": "configuring", 00:17:53.363 "raid_level": "raid0", 00:17:53.363 "superblock": false, 00:17:53.363 "num_base_bdevs": 4, 00:17:53.363 "num_base_bdevs_discovered": 1, 00:17:53.363 "num_base_bdevs_operational": 4, 00:17:53.363 "base_bdevs_list": [ 00:17:53.363 { 00:17:53.363 "name": "BaseBdev1", 00:17:53.363 "uuid": "5fb2e5c9-d706-4a40-bc89-e5d92694f79b", 00:17:53.363 "is_configured": true, 00:17:53.363 "data_offset": 0, 00:17:53.363 "data_size": 65536 00:17:53.363 }, 00:17:53.363 { 00:17:53.363 "name": "BaseBdev2", 00:17:53.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.363 "is_configured": false, 00:17:53.363 "data_offset": 0, 00:17:53.363 "data_size": 0 00:17:53.363 }, 00:17:53.363 { 00:17:53.363 "name": "BaseBdev3", 00:17:53.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.363 "is_configured": false, 00:17:53.363 "data_offset": 0, 00:17:53.363 "data_size": 0 00:17:53.363 }, 00:17:53.363 { 00:17:53.363 "name": "BaseBdev4", 00:17:53.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.363 "is_configured": false, 00:17:53.363 "data_offset": 0, 00:17:53.363 "data_size": 0 00:17:53.363 } 00:17:53.363 ] 00:17:53.363 }' 00:17:53.363 13:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:17:53.363 13:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:53.931 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:54.191 [2024-07-26 13:17:34.581545] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:54.191 [2024-07-26 13:17:34.581586] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15e87d0 name Existed_Raid, state configuring 00:17:54.191 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:54.450 [2024-07-26 13:17:34.810194] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:54.450 [2024-07-26 13:17:34.811583] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:54.450 [2024-07-26 13:17:34.811618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:54.450 [2024-07-26 13:17:34.811628] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:54.450 [2024-07-26 13:17:34.811639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:54.450 [2024-07-26 13:17:34.811647] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:54.450 [2024-07-26 13:17:34.811657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:54.450 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:54.450 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:54.450 13:17:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:54.450 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:54.450 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:54.450 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:54.450 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:54.450 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:54.450 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:54.450 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:54.450 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:54.450 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:54.450 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.450 13:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.710 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:54.710 "name": "Existed_Raid", 00:17:54.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.710 "strip_size_kb": 64, 00:17:54.710 "state": "configuring", 00:17:54.710 "raid_level": "raid0", 00:17:54.710 "superblock": false, 00:17:54.710 "num_base_bdevs": 4, 00:17:54.710 "num_base_bdevs_discovered": 1, 00:17:54.710 "num_base_bdevs_operational": 4, 00:17:54.710 "base_bdevs_list": [ 00:17:54.710 { 
00:17:54.710 "name": "BaseBdev1", 00:17:54.710 "uuid": "5fb2e5c9-d706-4a40-bc89-e5d92694f79b", 00:17:54.710 "is_configured": true, 00:17:54.710 "data_offset": 0, 00:17:54.710 "data_size": 65536 00:17:54.710 }, 00:17:54.710 { 00:17:54.710 "name": "BaseBdev2", 00:17:54.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.710 "is_configured": false, 00:17:54.710 "data_offset": 0, 00:17:54.710 "data_size": 0 00:17:54.710 }, 00:17:54.710 { 00:17:54.710 "name": "BaseBdev3", 00:17:54.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.710 "is_configured": false, 00:17:54.710 "data_offset": 0, 00:17:54.710 "data_size": 0 00:17:54.710 }, 00:17:54.710 { 00:17:54.710 "name": "BaseBdev4", 00:17:54.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.710 "is_configured": false, 00:17:54.710 "data_offset": 0, 00:17:54.710 "data_size": 0 00:17:54.710 } 00:17:54.710 ] 00:17:54.710 }' 00:17:54.710 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:54.710 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.279 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:55.279 [2024-07-26 13:17:35.739792] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:55.279 BaseBdev2 00:17:55.279 13:17:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:55.279 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:55.279 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:55.279 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:55.279 13:17:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:55.279 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:55.279 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:55.538 13:17:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:55.797 [ 00:17:55.797 { 00:17:55.797 "name": "BaseBdev2", 00:17:55.797 "aliases": [ 00:17:55.797 "23fa05e9-8748-4ab4-b172-22594d5d710d" 00:17:55.797 ], 00:17:55.797 "product_name": "Malloc disk", 00:17:55.797 "block_size": 512, 00:17:55.797 "num_blocks": 65536, 00:17:55.797 "uuid": "23fa05e9-8748-4ab4-b172-22594d5d710d", 00:17:55.797 "assigned_rate_limits": { 00:17:55.797 "rw_ios_per_sec": 0, 00:17:55.797 "rw_mbytes_per_sec": 0, 00:17:55.797 "r_mbytes_per_sec": 0, 00:17:55.797 "w_mbytes_per_sec": 0 00:17:55.797 }, 00:17:55.797 "claimed": true, 00:17:55.797 "claim_type": "exclusive_write", 00:17:55.797 "zoned": false, 00:17:55.797 "supported_io_types": { 00:17:55.797 "read": true, 00:17:55.797 "write": true, 00:17:55.797 "unmap": true, 00:17:55.797 "flush": true, 00:17:55.797 "reset": true, 00:17:55.797 "nvme_admin": false, 00:17:55.797 "nvme_io": false, 00:17:55.797 "nvme_io_md": false, 00:17:55.797 "write_zeroes": true, 00:17:55.797 "zcopy": true, 00:17:55.797 "get_zone_info": false, 00:17:55.797 "zone_management": false, 00:17:55.797 "zone_append": false, 00:17:55.797 "compare": false, 00:17:55.797 "compare_and_write": false, 00:17:55.797 "abort": true, 00:17:55.797 "seek_hole": false, 00:17:55.797 "seek_data": false, 00:17:55.797 "copy": true, 00:17:55.797 "nvme_iov_md": false 00:17:55.797 }, 00:17:55.797 "memory_domains": [ 00:17:55.797 { 00:17:55.797 "dma_device_id": "system", 
00:17:55.797 "dma_device_type": 1 00:17:55.797 }, 00:17:55.797 { 00:17:55.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.797 "dma_device_type": 2 00:17:55.797 } 00:17:55.797 ], 00:17:55.797 "driver_specific": {} 00:17:55.797 } 00:17:55.797 ] 00:17:55.797 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:55.797 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:55.797 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:55.797 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:55.797 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:55.797 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:55.797 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:55.797 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:55.797 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:55.797 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:55.797 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:55.797 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:55.797 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:55.797 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.797 13:17:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.056 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:56.056 "name": "Existed_Raid", 00:17:56.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.056 "strip_size_kb": 64, 00:17:56.056 "state": "configuring", 00:17:56.056 "raid_level": "raid0", 00:17:56.056 "superblock": false, 00:17:56.056 "num_base_bdevs": 4, 00:17:56.056 "num_base_bdevs_discovered": 2, 00:17:56.056 "num_base_bdevs_operational": 4, 00:17:56.056 "base_bdevs_list": [ 00:17:56.056 { 00:17:56.056 "name": "BaseBdev1", 00:17:56.056 "uuid": "5fb2e5c9-d706-4a40-bc89-e5d92694f79b", 00:17:56.056 "is_configured": true, 00:17:56.056 "data_offset": 0, 00:17:56.056 "data_size": 65536 00:17:56.056 }, 00:17:56.056 { 00:17:56.056 "name": "BaseBdev2", 00:17:56.056 "uuid": "23fa05e9-8748-4ab4-b172-22594d5d710d", 00:17:56.056 "is_configured": true, 00:17:56.056 "data_offset": 0, 00:17:56.056 "data_size": 65536 00:17:56.056 }, 00:17:56.056 { 00:17:56.056 "name": "BaseBdev3", 00:17:56.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.056 "is_configured": false, 00:17:56.056 "data_offset": 0, 00:17:56.056 "data_size": 0 00:17:56.056 }, 00:17:56.056 { 00:17:56.056 "name": "BaseBdev4", 00:17:56.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.056 "is_configured": false, 00:17:56.056 "data_offset": 0, 00:17:56.056 "data_size": 0 00:17:56.056 } 00:17:56.056 ] 00:17:56.056 }' 00:17:56.056 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:56.056 13:17:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.625 13:17:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:56.884 [2024-07-26 13:17:37.202824] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:56.884 BaseBdev3 00:17:56.884 13:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:56.884 13:17:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:56.884 13:17:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:56.884 13:17:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:56.884 13:17:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:56.884 13:17:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:56.884 13:17:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:57.144 13:17:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:57.144 [ 00:17:57.144 { 00:17:57.144 "name": "BaseBdev3", 00:17:57.144 "aliases": [ 00:17:57.144 "e47b1cc1-91cb-483d-a338-5859af84b2e3" 00:17:57.144 ], 00:17:57.144 "product_name": "Malloc disk", 00:17:57.144 "block_size": 512, 00:17:57.144 "num_blocks": 65536, 00:17:57.144 "uuid": "e47b1cc1-91cb-483d-a338-5859af84b2e3", 00:17:57.144 "assigned_rate_limits": { 00:17:57.144 "rw_ios_per_sec": 0, 00:17:57.144 "rw_mbytes_per_sec": 0, 00:17:57.144 "r_mbytes_per_sec": 0, 00:17:57.144 "w_mbytes_per_sec": 0 00:17:57.144 }, 00:17:57.144 "claimed": true, 00:17:57.144 "claim_type": "exclusive_write", 00:17:57.144 "zoned": false, 00:17:57.144 "supported_io_types": { 00:17:57.144 "read": true, 00:17:57.144 "write": true, 00:17:57.144 "unmap": true, 00:17:57.144 "flush": true, 00:17:57.144 
"reset": true, 00:17:57.144 "nvme_admin": false, 00:17:57.144 "nvme_io": false, 00:17:57.144 "nvme_io_md": false, 00:17:57.144 "write_zeroes": true, 00:17:57.144 "zcopy": true, 00:17:57.144 "get_zone_info": false, 00:17:57.144 "zone_management": false, 00:17:57.144 "zone_append": false, 00:17:57.144 "compare": false, 00:17:57.144 "compare_and_write": false, 00:17:57.144 "abort": true, 00:17:57.144 "seek_hole": false, 00:17:57.144 "seek_data": false, 00:17:57.144 "copy": true, 00:17:57.144 "nvme_iov_md": false 00:17:57.144 }, 00:17:57.144 "memory_domains": [ 00:17:57.144 { 00:17:57.144 "dma_device_id": "system", 00:17:57.144 "dma_device_type": 1 00:17:57.144 }, 00:17:57.144 { 00:17:57.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.144 "dma_device_type": 2 00:17:57.144 } 00:17:57.144 ], 00:17:57.144 "driver_specific": {} 00:17:57.144 } 00:17:57.144 ] 00:17:57.144 13:17:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:57.144 13:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:57.144 13:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:57.144 13:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:57.144 13:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:57.144 13:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:57.144 13:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:57.144 13:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:57.144 13:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:57.144 13:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:17:57.144 13:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:57.144 13:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:57.144 13:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:57.144 13:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.144 13:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.403 13:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:57.403 "name": "Existed_Raid", 00:17:57.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.403 "strip_size_kb": 64, 00:17:57.403 "state": "configuring", 00:17:57.403 "raid_level": "raid0", 00:17:57.403 "superblock": false, 00:17:57.403 "num_base_bdevs": 4, 00:17:57.403 "num_base_bdevs_discovered": 3, 00:17:57.403 "num_base_bdevs_operational": 4, 00:17:57.403 "base_bdevs_list": [ 00:17:57.403 { 00:17:57.403 "name": "BaseBdev1", 00:17:57.403 "uuid": "5fb2e5c9-d706-4a40-bc89-e5d92694f79b", 00:17:57.403 "is_configured": true, 00:17:57.403 "data_offset": 0, 00:17:57.403 "data_size": 65536 00:17:57.403 }, 00:17:57.403 { 00:17:57.403 "name": "BaseBdev2", 00:17:57.403 "uuid": "23fa05e9-8748-4ab4-b172-22594d5d710d", 00:17:57.403 "is_configured": true, 00:17:57.403 "data_offset": 0, 00:17:57.403 "data_size": 65536 00:17:57.403 }, 00:17:57.403 { 00:17:57.403 "name": "BaseBdev3", 00:17:57.403 "uuid": "e47b1cc1-91cb-483d-a338-5859af84b2e3", 00:17:57.403 "is_configured": true, 00:17:57.403 "data_offset": 0, 00:17:57.403 "data_size": 65536 00:17:57.403 }, 00:17:57.403 { 00:17:57.403 "name": "BaseBdev4", 00:17:57.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.403 "is_configured": 
false, 00:17:57.403 "data_offset": 0, 00:17:57.403 "data_size": 0 00:17:57.403 } 00:17:57.403 ] 00:17:57.403 }' 00:17:57.403 13:17:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:57.403 13:17:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.972 13:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:58.231 [2024-07-26 13:17:38.641770] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:58.231 [2024-07-26 13:17:38.641808] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x15e9840 00:17:58.231 [2024-07-26 13:17:38.641816] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:58.231 [2024-07-26 13:17:38.641991] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15e9480 00:17:58.231 [2024-07-26 13:17:38.642104] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15e9840 00:17:58.231 [2024-07-26 13:17:38.642113] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x15e9840 00:17:58.231 [2024-07-26 13:17:38.642274] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.231 BaseBdev4 00:17:58.231 13:17:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:17:58.231 13:17:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:17:58.231 13:17:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:58.231 13:17:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:58.231 13:17:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
00:17:58.232 13:17:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:58.232 13:17:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:58.491 13:17:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:58.750 [ 00:17:58.750 { 00:17:58.750 "name": "BaseBdev4", 00:17:58.750 "aliases": [ 00:17:58.751 "bd36cb7a-8de3-4d1e-868e-40e750e2f66b" 00:17:58.751 ], 00:17:58.751 "product_name": "Malloc disk", 00:17:58.751 "block_size": 512, 00:17:58.751 "num_blocks": 65536, 00:17:58.751 "uuid": "bd36cb7a-8de3-4d1e-868e-40e750e2f66b", 00:17:58.751 "assigned_rate_limits": { 00:17:58.751 "rw_ios_per_sec": 0, 00:17:58.751 "rw_mbytes_per_sec": 0, 00:17:58.751 "r_mbytes_per_sec": 0, 00:17:58.751 "w_mbytes_per_sec": 0 00:17:58.751 }, 00:17:58.751 "claimed": true, 00:17:58.751 "claim_type": "exclusive_write", 00:17:58.751 "zoned": false, 00:17:58.751 "supported_io_types": { 00:17:58.751 "read": true, 00:17:58.751 "write": true, 00:17:58.751 "unmap": true, 00:17:58.751 "flush": true, 00:17:58.751 "reset": true, 00:17:58.751 "nvme_admin": false, 00:17:58.751 "nvme_io": false, 00:17:58.751 "nvme_io_md": false, 00:17:58.751 "write_zeroes": true, 00:17:58.751 "zcopy": true, 00:17:58.751 "get_zone_info": false, 00:17:58.751 "zone_management": false, 00:17:58.751 "zone_append": false, 00:17:58.751 "compare": false, 00:17:58.751 "compare_and_write": false, 00:17:58.751 "abort": true, 00:17:58.751 "seek_hole": false, 00:17:58.751 "seek_data": false, 00:17:58.751 "copy": true, 00:17:58.751 "nvme_iov_md": false 00:17:58.751 }, 00:17:58.751 "memory_domains": [ 00:17:58.751 { 00:17:58.751 "dma_device_id": "system", 00:17:58.751 "dma_device_type": 1 00:17:58.751 
}, 00:17:58.751 { 00:17:58.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.751 "dma_device_type": 2 00:17:58.751 } 00:17:58.751 ], 00:17:58.751 "driver_specific": {} 00:17:58.751 } 00:17:58.751 ] 00:17:58.751 13:17:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:58.751 13:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:58.751 13:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:58.751 13:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:58.751 13:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:58.751 13:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:58.751 13:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:58.751 13:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:58.751 13:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:17:58.751 13:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:58.751 13:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:58.751 13:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:58.751 13:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:58.751 13:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.751 13:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:17:59.010 13:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:59.010 "name": "Existed_Raid", 00:17:59.010 "uuid": "327005ad-28bc-4b18-8a04-36196a6a0423", 00:17:59.010 "strip_size_kb": 64, 00:17:59.010 "state": "online", 00:17:59.010 "raid_level": "raid0", 00:17:59.010 "superblock": false, 00:17:59.010 "num_base_bdevs": 4, 00:17:59.010 "num_base_bdevs_discovered": 4, 00:17:59.010 "num_base_bdevs_operational": 4, 00:17:59.010 "base_bdevs_list": [ 00:17:59.010 { 00:17:59.010 "name": "BaseBdev1", 00:17:59.010 "uuid": "5fb2e5c9-d706-4a40-bc89-e5d92694f79b", 00:17:59.010 "is_configured": true, 00:17:59.010 "data_offset": 0, 00:17:59.010 "data_size": 65536 00:17:59.010 }, 00:17:59.010 { 00:17:59.010 "name": "BaseBdev2", 00:17:59.010 "uuid": "23fa05e9-8748-4ab4-b172-22594d5d710d", 00:17:59.010 "is_configured": true, 00:17:59.010 "data_offset": 0, 00:17:59.010 "data_size": 65536 00:17:59.010 }, 00:17:59.010 { 00:17:59.010 "name": "BaseBdev3", 00:17:59.010 "uuid": "e47b1cc1-91cb-483d-a338-5859af84b2e3", 00:17:59.010 "is_configured": true, 00:17:59.010 "data_offset": 0, 00:17:59.010 "data_size": 65536 00:17:59.010 }, 00:17:59.010 { 00:17:59.010 "name": "BaseBdev4", 00:17:59.010 "uuid": "bd36cb7a-8de3-4d1e-868e-40e750e2f66b", 00:17:59.010 "is_configured": true, 00:17:59.010 "data_offset": 0, 00:17:59.010 "data_size": 65536 00:17:59.010 } 00:17:59.010 ] 00:17:59.010 }' 00:17:59.010 13:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:59.010 13:17:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.587 13:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:59.587 13:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:59.587 13:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 
00:17:59.587 13:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:59.587 13:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:59.587 13:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:59.587 13:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:59.587 13:17:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:59.851 [2024-07-26 13:17:40.130013] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.851 13:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:59.851 "name": "Existed_Raid", 00:17:59.851 "aliases": [ 00:17:59.851 "327005ad-28bc-4b18-8a04-36196a6a0423" 00:17:59.851 ], 00:17:59.851 "product_name": "Raid Volume", 00:17:59.851 "block_size": 512, 00:17:59.851 "num_blocks": 262144, 00:17:59.851 "uuid": "327005ad-28bc-4b18-8a04-36196a6a0423", 00:17:59.851 "assigned_rate_limits": { 00:17:59.851 "rw_ios_per_sec": 0, 00:17:59.851 "rw_mbytes_per_sec": 0, 00:17:59.851 "r_mbytes_per_sec": 0, 00:17:59.851 "w_mbytes_per_sec": 0 00:17:59.851 }, 00:17:59.851 "claimed": false, 00:17:59.851 "zoned": false, 00:17:59.851 "supported_io_types": { 00:17:59.851 "read": true, 00:17:59.851 "write": true, 00:17:59.851 "unmap": true, 00:17:59.851 "flush": true, 00:17:59.851 "reset": true, 00:17:59.851 "nvme_admin": false, 00:17:59.851 "nvme_io": false, 00:17:59.851 "nvme_io_md": false, 00:17:59.851 "write_zeroes": true, 00:17:59.851 "zcopy": false, 00:17:59.851 "get_zone_info": false, 00:17:59.851 "zone_management": false, 00:17:59.851 "zone_append": false, 00:17:59.851 "compare": false, 00:17:59.851 "compare_and_write": false, 00:17:59.851 "abort": false, 00:17:59.851 "seek_hole": false, 
00:17:59.851 "seek_data": false, 00:17:59.851 "copy": false, 00:17:59.851 "nvme_iov_md": false 00:17:59.851 }, 00:17:59.851 "memory_domains": [ 00:17:59.851 { 00:17:59.851 "dma_device_id": "system", 00:17:59.851 "dma_device_type": 1 00:17:59.851 }, 00:17:59.851 { 00:17:59.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.851 "dma_device_type": 2 00:17:59.851 }, 00:17:59.851 { 00:17:59.851 "dma_device_id": "system", 00:17:59.851 "dma_device_type": 1 00:17:59.851 }, 00:17:59.851 { 00:17:59.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.851 "dma_device_type": 2 00:17:59.851 }, 00:17:59.851 { 00:17:59.851 "dma_device_id": "system", 00:17:59.851 "dma_device_type": 1 00:17:59.852 }, 00:17:59.852 { 00:17:59.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.852 "dma_device_type": 2 00:17:59.852 }, 00:17:59.852 { 00:17:59.852 "dma_device_id": "system", 00:17:59.852 "dma_device_type": 1 00:17:59.852 }, 00:17:59.852 { 00:17:59.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.852 "dma_device_type": 2 00:17:59.852 } 00:17:59.852 ], 00:17:59.852 "driver_specific": { 00:17:59.852 "raid": { 00:17:59.852 "uuid": "327005ad-28bc-4b18-8a04-36196a6a0423", 00:17:59.852 "strip_size_kb": 64, 00:17:59.852 "state": "online", 00:17:59.852 "raid_level": "raid0", 00:17:59.852 "superblock": false, 00:17:59.852 "num_base_bdevs": 4, 00:17:59.852 "num_base_bdevs_discovered": 4, 00:17:59.852 "num_base_bdevs_operational": 4, 00:17:59.852 "base_bdevs_list": [ 00:17:59.852 { 00:17:59.852 "name": "BaseBdev1", 00:17:59.852 "uuid": "5fb2e5c9-d706-4a40-bc89-e5d92694f79b", 00:17:59.852 "is_configured": true, 00:17:59.852 "data_offset": 0, 00:17:59.852 "data_size": 65536 00:17:59.852 }, 00:17:59.852 { 00:17:59.852 "name": "BaseBdev2", 00:17:59.852 "uuid": "23fa05e9-8748-4ab4-b172-22594d5d710d", 00:17:59.852 "is_configured": true, 00:17:59.852 "data_offset": 0, 00:17:59.852 "data_size": 65536 00:17:59.852 }, 00:17:59.852 { 00:17:59.852 "name": "BaseBdev3", 00:17:59.852 "uuid": 
"e47b1cc1-91cb-483d-a338-5859af84b2e3", 00:17:59.852 "is_configured": true, 00:17:59.852 "data_offset": 0, 00:17:59.852 "data_size": 65536 00:17:59.852 }, 00:17:59.852 { 00:17:59.852 "name": "BaseBdev4", 00:17:59.852 "uuid": "bd36cb7a-8de3-4d1e-868e-40e750e2f66b", 00:17:59.852 "is_configured": true, 00:17:59.852 "data_offset": 0, 00:17:59.852 "data_size": 65536 00:17:59.852 } 00:17:59.852 ] 00:17:59.852 } 00:17:59.852 } 00:17:59.852 }' 00:17:59.852 13:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:59.852 13:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:59.852 BaseBdev2 00:17:59.852 BaseBdev3 00:17:59.852 BaseBdev4' 00:17:59.852 13:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:59.852 13:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:59.852 13:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:00.111 13:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:00.111 "name": "BaseBdev1", 00:18:00.111 "aliases": [ 00:18:00.111 "5fb2e5c9-d706-4a40-bc89-e5d92694f79b" 00:18:00.111 ], 00:18:00.111 "product_name": "Malloc disk", 00:18:00.111 "block_size": 512, 00:18:00.111 "num_blocks": 65536, 00:18:00.111 "uuid": "5fb2e5c9-d706-4a40-bc89-e5d92694f79b", 00:18:00.111 "assigned_rate_limits": { 00:18:00.111 "rw_ios_per_sec": 0, 00:18:00.111 "rw_mbytes_per_sec": 0, 00:18:00.111 "r_mbytes_per_sec": 0, 00:18:00.111 "w_mbytes_per_sec": 0 00:18:00.111 }, 00:18:00.111 "claimed": true, 00:18:00.111 "claim_type": "exclusive_write", 00:18:00.111 "zoned": false, 00:18:00.111 "supported_io_types": { 00:18:00.111 "read": true, 00:18:00.111 
"write": true, 00:18:00.111 "unmap": true, 00:18:00.111 "flush": true, 00:18:00.111 "reset": true, 00:18:00.111 "nvme_admin": false, 00:18:00.111 "nvme_io": false, 00:18:00.111 "nvme_io_md": false, 00:18:00.111 "write_zeroes": true, 00:18:00.111 "zcopy": true, 00:18:00.111 "get_zone_info": false, 00:18:00.111 "zone_management": false, 00:18:00.111 "zone_append": false, 00:18:00.111 "compare": false, 00:18:00.111 "compare_and_write": false, 00:18:00.111 "abort": true, 00:18:00.111 "seek_hole": false, 00:18:00.111 "seek_data": false, 00:18:00.111 "copy": true, 00:18:00.111 "nvme_iov_md": false 00:18:00.111 }, 00:18:00.111 "memory_domains": [ 00:18:00.111 { 00:18:00.111 "dma_device_id": "system", 00:18:00.111 "dma_device_type": 1 00:18:00.111 }, 00:18:00.111 { 00:18:00.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.111 "dma_device_type": 2 00:18:00.111 } 00:18:00.111 ], 00:18:00.111 "driver_specific": {} 00:18:00.111 }' 00:18:00.111 13:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:00.111 13:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:00.111 13:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:00.111 13:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:00.111 13:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:00.111 13:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:00.111 13:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:00.370 13:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:00.370 13:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:00.370 13:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:00.370 13:17:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:00.370 13:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:00.370 13:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:00.370 13:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:00.370 13:17:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:00.630 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:00.630 "name": "BaseBdev2", 00:18:00.630 "aliases": [ 00:18:00.630 "23fa05e9-8748-4ab4-b172-22594d5d710d" 00:18:00.630 ], 00:18:00.630 "product_name": "Malloc disk", 00:18:00.630 "block_size": 512, 00:18:00.630 "num_blocks": 65536, 00:18:00.630 "uuid": "23fa05e9-8748-4ab4-b172-22594d5d710d", 00:18:00.630 "assigned_rate_limits": { 00:18:00.630 "rw_ios_per_sec": 0, 00:18:00.630 "rw_mbytes_per_sec": 0, 00:18:00.630 "r_mbytes_per_sec": 0, 00:18:00.630 "w_mbytes_per_sec": 0 00:18:00.630 }, 00:18:00.630 "claimed": true, 00:18:00.630 "claim_type": "exclusive_write", 00:18:00.630 "zoned": false, 00:18:00.630 "supported_io_types": { 00:18:00.630 "read": true, 00:18:00.630 "write": true, 00:18:00.631 "unmap": true, 00:18:00.631 "flush": true, 00:18:00.631 "reset": true, 00:18:00.631 "nvme_admin": false, 00:18:00.631 "nvme_io": false, 00:18:00.631 "nvme_io_md": false, 00:18:00.631 "write_zeroes": true, 00:18:00.631 "zcopy": true, 00:18:00.631 "get_zone_info": false, 00:18:00.631 "zone_management": false, 00:18:00.631 "zone_append": false, 00:18:00.631 "compare": false, 00:18:00.631 "compare_and_write": false, 00:18:00.631 "abort": true, 00:18:00.631 "seek_hole": false, 00:18:00.631 "seek_data": false, 00:18:00.631 "copy": true, 00:18:00.631 "nvme_iov_md": false 00:18:00.631 }, 
00:18:00.631 "memory_domains": [ 00:18:00.631 { 00:18:00.631 "dma_device_id": "system", 00:18:00.631 "dma_device_type": 1 00:18:00.631 }, 00:18:00.631 { 00:18:00.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.631 "dma_device_type": 2 00:18:00.631 } 00:18:00.631 ], 00:18:00.631 "driver_specific": {} 00:18:00.631 }' 00:18:00.631 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:00.631 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:00.631 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:00.631 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:00.631 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:00.952 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:00.952 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:00.952 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:00.952 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:00.952 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:00.952 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:00.952 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:00.952 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:00.952 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:00.952 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:01.211 13:17:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:01.211 "name": "BaseBdev3", 00:18:01.211 "aliases": [ 00:18:01.211 "e47b1cc1-91cb-483d-a338-5859af84b2e3" 00:18:01.211 ], 00:18:01.211 "product_name": "Malloc disk", 00:18:01.211 "block_size": 512, 00:18:01.211 "num_blocks": 65536, 00:18:01.211 "uuid": "e47b1cc1-91cb-483d-a338-5859af84b2e3", 00:18:01.211 "assigned_rate_limits": { 00:18:01.211 "rw_ios_per_sec": 0, 00:18:01.211 "rw_mbytes_per_sec": 0, 00:18:01.211 "r_mbytes_per_sec": 0, 00:18:01.211 "w_mbytes_per_sec": 0 00:18:01.211 }, 00:18:01.211 "claimed": true, 00:18:01.211 "claim_type": "exclusive_write", 00:18:01.211 "zoned": false, 00:18:01.211 "supported_io_types": { 00:18:01.211 "read": true, 00:18:01.211 "write": true, 00:18:01.211 "unmap": true, 00:18:01.211 "flush": true, 00:18:01.211 "reset": true, 00:18:01.211 "nvme_admin": false, 00:18:01.211 "nvme_io": false, 00:18:01.211 "nvme_io_md": false, 00:18:01.211 "write_zeroes": true, 00:18:01.211 "zcopy": true, 00:18:01.211 "get_zone_info": false, 00:18:01.211 "zone_management": false, 00:18:01.212 "zone_append": false, 00:18:01.212 "compare": false, 00:18:01.212 "compare_and_write": false, 00:18:01.212 "abort": true, 00:18:01.212 "seek_hole": false, 00:18:01.212 "seek_data": false, 00:18:01.212 "copy": true, 00:18:01.212 "nvme_iov_md": false 00:18:01.212 }, 00:18:01.212 "memory_domains": [ 00:18:01.212 { 00:18:01.212 "dma_device_id": "system", 00:18:01.212 "dma_device_type": 1 00:18:01.212 }, 00:18:01.212 { 00:18:01.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.212 "dma_device_type": 2 00:18:01.212 } 00:18:01.212 ], 00:18:01.212 "driver_specific": {} 00:18:01.212 }' 00:18:01.212 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:01.212 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:01.212 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 
== 512 ]] 00:18:01.212 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:01.212 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:01.471 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:01.471 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:01.471 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:01.471 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:01.471 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:01.471 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:01.471 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:01.471 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:01.471 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:01.471 13:17:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:01.730 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:01.730 "name": "BaseBdev4", 00:18:01.730 "aliases": [ 00:18:01.730 "bd36cb7a-8de3-4d1e-868e-40e750e2f66b" 00:18:01.730 ], 00:18:01.730 "product_name": "Malloc disk", 00:18:01.730 "block_size": 512, 00:18:01.730 "num_blocks": 65536, 00:18:01.730 "uuid": "bd36cb7a-8de3-4d1e-868e-40e750e2f66b", 00:18:01.730 "assigned_rate_limits": { 00:18:01.730 "rw_ios_per_sec": 0, 00:18:01.730 "rw_mbytes_per_sec": 0, 00:18:01.730 "r_mbytes_per_sec": 0, 00:18:01.730 "w_mbytes_per_sec": 0 00:18:01.730 }, 00:18:01.730 "claimed": true, 00:18:01.730 
"claim_type": "exclusive_write", 00:18:01.730 "zoned": false, 00:18:01.730 "supported_io_types": { 00:18:01.730 "read": true, 00:18:01.730 "write": true, 00:18:01.730 "unmap": true, 00:18:01.730 "flush": true, 00:18:01.730 "reset": true, 00:18:01.730 "nvme_admin": false, 00:18:01.730 "nvme_io": false, 00:18:01.730 "nvme_io_md": false, 00:18:01.730 "write_zeroes": true, 00:18:01.730 "zcopy": true, 00:18:01.730 "get_zone_info": false, 00:18:01.730 "zone_management": false, 00:18:01.730 "zone_append": false, 00:18:01.730 "compare": false, 00:18:01.730 "compare_and_write": false, 00:18:01.730 "abort": true, 00:18:01.730 "seek_hole": false, 00:18:01.730 "seek_data": false, 00:18:01.730 "copy": true, 00:18:01.730 "nvme_iov_md": false 00:18:01.730 }, 00:18:01.730 "memory_domains": [ 00:18:01.730 { 00:18:01.730 "dma_device_id": "system", 00:18:01.730 "dma_device_type": 1 00:18:01.730 }, 00:18:01.730 { 00:18:01.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.730 "dma_device_type": 2 00:18:01.730 } 00:18:01.730 ], 00:18:01.730 "driver_specific": {} 00:18:01.730 }' 00:18:01.730 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:01.730 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:01.730 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:01.730 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:01.989 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:01.989 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:01.989 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:01.989 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:01.989 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == 
null ]] 00:18:01.989 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:01.989 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:01.989 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:01.989 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:02.248 [2024-07-26 13:17:42.672453] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:02.248 [2024-07-26 13:17:42.672480] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:02.248 [2024-07-26 13:17:42.672524] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.248 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:02.248 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:18:02.248 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:02.248 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:02.248 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:02.248 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:18:02.248 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:02.248 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:18:02.248 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:02.248 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:18:02.248 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:02.248 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:02.248 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:02.248 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:02.248 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:02.248 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.248 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:02.508 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:02.508 "name": "Existed_Raid", 00:18:02.508 "uuid": "327005ad-28bc-4b18-8a04-36196a6a0423", 00:18:02.508 "strip_size_kb": 64, 00:18:02.508 "state": "offline", 00:18:02.508 "raid_level": "raid0", 00:18:02.508 "superblock": false, 00:18:02.508 "num_base_bdevs": 4, 00:18:02.508 "num_base_bdevs_discovered": 3, 00:18:02.508 "num_base_bdevs_operational": 3, 00:18:02.508 "base_bdevs_list": [ 00:18:02.508 { 00:18:02.508 "name": null, 00:18:02.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.508 "is_configured": false, 00:18:02.508 "data_offset": 0, 00:18:02.508 "data_size": 65536 00:18:02.508 }, 00:18:02.508 { 00:18:02.508 "name": "BaseBdev2", 00:18:02.508 "uuid": "23fa05e9-8748-4ab4-b172-22594d5d710d", 00:18:02.508 "is_configured": true, 00:18:02.508 "data_offset": 0, 00:18:02.508 "data_size": 65536 00:18:02.508 }, 00:18:02.508 { 00:18:02.508 "name": "BaseBdev3", 00:18:02.508 "uuid": "e47b1cc1-91cb-483d-a338-5859af84b2e3", 00:18:02.508 "is_configured": true, 00:18:02.508 
"data_offset": 0, 00:18:02.508 "data_size": 65536 00:18:02.508 }, 00:18:02.508 { 00:18:02.508 "name": "BaseBdev4", 00:18:02.508 "uuid": "bd36cb7a-8de3-4d1e-868e-40e750e2f66b", 00:18:02.508 "is_configured": true, 00:18:02.508 "data_offset": 0, 00:18:02.508 "data_size": 65536 00:18:02.508 } 00:18:02.508 ] 00:18:02.508 }' 00:18:02.508 13:17:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:02.508 13:17:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.075 13:17:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:03.075 13:17:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:03.075 13:17:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.075 13:17:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:03.334 13:17:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:03.334 13:17:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:03.334 13:17:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:03.594 [2024-07-26 13:17:43.944783] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:03.594 13:17:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:03.594 13:17:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:03.594 13:17:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:18:03.594 13:17:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:03.853 13:17:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:03.853 13:17:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:03.853 13:17:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:04.113 [2024-07-26 13:17:44.412132] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:04.113 13:17:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:04.113 13:17:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:04.113 13:17:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:04.113 13:17:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.372 13:17:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:04.372 13:17:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:04.372 13:17:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:04.372 [2024-07-26 13:17:44.875303] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:04.372 [2024-07-26 13:17:44.875345] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15e9840 name Existed_Raid, state offline 00:18:04.632 13:17:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:04.632 13:17:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:04.632 13:17:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:04.632 13:17:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.632 13:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:04.632 13:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:04.632 13:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:18:04.632 13:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:04.632 13:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:04.632 13:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:04.891 BaseBdev2 00:18:04.891 13:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:04.891 13:17:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:04.891 13:17:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:04.891 13:17:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:04.891 13:17:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:04.891 13:17:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:04.891 13:17:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:05.150 13:17:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:05.410 [ 00:18:05.410 { 00:18:05.410 "name": "BaseBdev2", 00:18:05.410 "aliases": [ 00:18:05.410 "8749105b-38fb-4cd2-9de1-69504f784d42" 00:18:05.410 ], 00:18:05.410 "product_name": "Malloc disk", 00:18:05.410 "block_size": 512, 00:18:05.410 "num_blocks": 65536, 00:18:05.410 "uuid": "8749105b-38fb-4cd2-9de1-69504f784d42", 00:18:05.410 "assigned_rate_limits": { 00:18:05.410 "rw_ios_per_sec": 0, 00:18:05.410 "rw_mbytes_per_sec": 0, 00:18:05.410 "r_mbytes_per_sec": 0, 00:18:05.410 "w_mbytes_per_sec": 0 00:18:05.410 }, 00:18:05.410 "claimed": false, 00:18:05.410 "zoned": false, 00:18:05.410 "supported_io_types": { 00:18:05.410 "read": true, 00:18:05.410 "write": true, 00:18:05.410 "unmap": true, 00:18:05.410 "flush": true, 00:18:05.410 "reset": true, 00:18:05.410 "nvme_admin": false, 00:18:05.410 "nvme_io": false, 00:18:05.410 "nvme_io_md": false, 00:18:05.410 "write_zeroes": true, 00:18:05.410 "zcopy": true, 00:18:05.410 "get_zone_info": false, 00:18:05.410 "zone_management": false, 00:18:05.410 "zone_append": false, 00:18:05.410 "compare": false, 00:18:05.410 "compare_and_write": false, 00:18:05.410 "abort": true, 00:18:05.410 "seek_hole": false, 00:18:05.410 "seek_data": false, 00:18:05.410 "copy": true, 00:18:05.410 "nvme_iov_md": false 00:18:05.410 }, 00:18:05.410 "memory_domains": [ 00:18:05.410 { 00:18:05.410 "dma_device_id": "system", 00:18:05.410 "dma_device_type": 1 00:18:05.410 }, 00:18:05.410 { 00:18:05.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.410 "dma_device_type": 2 00:18:05.410 } 00:18:05.410 ], 00:18:05.410 "driver_specific": {} 00:18:05.410 } 00:18:05.410 ] 00:18:05.410 13:17:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:05.410 
13:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:05.410 13:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:05.410 13:17:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:05.669 BaseBdev3 00:18:05.669 13:17:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:05.669 13:17:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:05.669 13:17:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:05.669 13:17:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:05.669 13:17:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:05.669 13:17:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:05.669 13:17:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:05.928 13:17:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:06.187 [ 00:18:06.187 { 00:18:06.187 "name": "BaseBdev3", 00:18:06.187 "aliases": [ 00:18:06.187 "21b896b7-40b8-4ea2-b1bf-968806ca6b21" 00:18:06.187 ], 00:18:06.187 "product_name": "Malloc disk", 00:18:06.187 "block_size": 512, 00:18:06.187 "num_blocks": 65536, 00:18:06.187 "uuid": "21b896b7-40b8-4ea2-b1bf-968806ca6b21", 00:18:06.187 "assigned_rate_limits": { 00:18:06.187 "rw_ios_per_sec": 0, 00:18:06.187 "rw_mbytes_per_sec": 0, 00:18:06.187 
"r_mbytes_per_sec": 0, 00:18:06.187 "w_mbytes_per_sec": 0 00:18:06.187 }, 00:18:06.187 "claimed": false, 00:18:06.187 "zoned": false, 00:18:06.187 "supported_io_types": { 00:18:06.187 "read": true, 00:18:06.187 "write": true, 00:18:06.187 "unmap": true, 00:18:06.187 "flush": true, 00:18:06.187 "reset": true, 00:18:06.187 "nvme_admin": false, 00:18:06.187 "nvme_io": false, 00:18:06.187 "nvme_io_md": false, 00:18:06.187 "write_zeroes": true, 00:18:06.187 "zcopy": true, 00:18:06.187 "get_zone_info": false, 00:18:06.187 "zone_management": false, 00:18:06.187 "zone_append": false, 00:18:06.187 "compare": false, 00:18:06.187 "compare_and_write": false, 00:18:06.187 "abort": true, 00:18:06.187 "seek_hole": false, 00:18:06.187 "seek_data": false, 00:18:06.187 "copy": true, 00:18:06.187 "nvme_iov_md": false 00:18:06.187 }, 00:18:06.187 "memory_domains": [ 00:18:06.187 { 00:18:06.187 "dma_device_id": "system", 00:18:06.187 "dma_device_type": 1 00:18:06.187 }, 00:18:06.187 { 00:18:06.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.187 "dma_device_type": 2 00:18:06.187 } 00:18:06.187 ], 00:18:06.187 "driver_specific": {} 00:18:06.187 } 00:18:06.187 ] 00:18:06.187 13:17:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:06.187 13:17:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:06.187 13:17:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:06.187 13:17:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:06.187 BaseBdev4 00:18:06.447 13:17:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:18:06.447 13:17:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:18:06.447 13:17:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:06.447 13:17:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:06.447 13:17:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:06.447 13:17:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:06.447 13:17:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:06.447 13:17:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:06.706 [ 00:18:06.706 { 00:18:06.706 "name": "BaseBdev4", 00:18:06.706 "aliases": [ 00:18:06.706 "b18d0919-fc2b-4c31-9b7b-e386e5afaf7c" 00:18:06.706 ], 00:18:06.706 "product_name": "Malloc disk", 00:18:06.706 "block_size": 512, 00:18:06.706 "num_blocks": 65536, 00:18:06.706 "uuid": "b18d0919-fc2b-4c31-9b7b-e386e5afaf7c", 00:18:06.706 "assigned_rate_limits": { 00:18:06.706 "rw_ios_per_sec": 0, 00:18:06.706 "rw_mbytes_per_sec": 0, 00:18:06.706 "r_mbytes_per_sec": 0, 00:18:06.706 "w_mbytes_per_sec": 0 00:18:06.706 }, 00:18:06.706 "claimed": false, 00:18:06.706 "zoned": false, 00:18:06.706 "supported_io_types": { 00:18:06.706 "read": true, 00:18:06.706 "write": true, 00:18:06.706 "unmap": true, 00:18:06.706 "flush": true, 00:18:06.706 "reset": true, 00:18:06.706 "nvme_admin": false, 00:18:06.706 "nvme_io": false, 00:18:06.706 "nvme_io_md": false, 00:18:06.706 "write_zeroes": true, 00:18:06.706 "zcopy": true, 00:18:06.706 "get_zone_info": false, 00:18:06.706 "zone_management": false, 00:18:06.706 "zone_append": false, 00:18:06.706 "compare": false, 00:18:06.706 "compare_and_write": false, 00:18:06.706 "abort": true, 00:18:06.706 
"seek_hole": false, 00:18:06.706 "seek_data": false, 00:18:06.706 "copy": true, 00:18:06.706 "nvme_iov_md": false 00:18:06.706 }, 00:18:06.706 "memory_domains": [ 00:18:06.706 { 00:18:06.706 "dma_device_id": "system", 00:18:06.706 "dma_device_type": 1 00:18:06.706 }, 00:18:06.706 { 00:18:06.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.706 "dma_device_type": 2 00:18:06.706 } 00:18:06.706 ], 00:18:06.706 "driver_specific": {} 00:18:06.706 } 00:18:06.706 ] 00:18:06.706 13:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:06.706 13:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:06.706 13:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:06.706 13:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:06.966 [2024-07-26 13:17:47.356764] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:06.966 [2024-07-26 13:17:47.356802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:06.966 [2024-07-26 13:17:47.356819] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:06.966 [2024-07-26 13:17:47.357967] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:06.966 [2024-07-26 13:17:47.358009] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:06.966 13:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:06.966 13:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:06.966 13:17:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:06.966 13:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:06.966 13:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:06.966 13:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:06.966 13:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:06.966 13:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:06.966 13:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:06.966 13:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:06.966 13:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.966 13:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.225 13:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:07.225 "name": "Existed_Raid", 00:18:07.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.225 "strip_size_kb": 64, 00:18:07.225 "state": "configuring", 00:18:07.225 "raid_level": "raid0", 00:18:07.225 "superblock": false, 00:18:07.225 "num_base_bdevs": 4, 00:18:07.225 "num_base_bdevs_discovered": 3, 00:18:07.225 "num_base_bdevs_operational": 4, 00:18:07.225 "base_bdevs_list": [ 00:18:07.225 { 00:18:07.225 "name": "BaseBdev1", 00:18:07.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.225 "is_configured": false, 00:18:07.225 "data_offset": 0, 00:18:07.225 "data_size": 0 00:18:07.225 }, 00:18:07.225 { 00:18:07.225 "name": "BaseBdev2", 00:18:07.225 "uuid": 
"8749105b-38fb-4cd2-9de1-69504f784d42", 00:18:07.225 "is_configured": true, 00:18:07.225 "data_offset": 0, 00:18:07.225 "data_size": 65536 00:18:07.225 }, 00:18:07.225 { 00:18:07.226 "name": "BaseBdev3", 00:18:07.226 "uuid": "21b896b7-40b8-4ea2-b1bf-968806ca6b21", 00:18:07.226 "is_configured": true, 00:18:07.226 "data_offset": 0, 00:18:07.226 "data_size": 65536 00:18:07.226 }, 00:18:07.226 { 00:18:07.226 "name": "BaseBdev4", 00:18:07.226 "uuid": "b18d0919-fc2b-4c31-9b7b-e386e5afaf7c", 00:18:07.226 "is_configured": true, 00:18:07.226 "data_offset": 0, 00:18:07.226 "data_size": 65536 00:18:07.226 } 00:18:07.226 ] 00:18:07.226 }' 00:18:07.226 13:17:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:07.226 13:17:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.794 13:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:08.053 [2024-07-26 13:17:48.403504] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:08.053 13:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:08.053 13:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:08.053 13:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:08.053 13:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:08.053 13:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:08.053 13:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:08.053 13:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:18:08.053 13:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:08.053 13:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:08.053 13:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:08.053 13:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.053 13:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.312 13:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:08.312 "name": "Existed_Raid", 00:18:08.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.312 "strip_size_kb": 64, 00:18:08.312 "state": "configuring", 00:18:08.312 "raid_level": "raid0", 00:18:08.312 "superblock": false, 00:18:08.312 "num_base_bdevs": 4, 00:18:08.312 "num_base_bdevs_discovered": 2, 00:18:08.312 "num_base_bdevs_operational": 4, 00:18:08.312 "base_bdevs_list": [ 00:18:08.313 { 00:18:08.313 "name": "BaseBdev1", 00:18:08.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.313 "is_configured": false, 00:18:08.313 "data_offset": 0, 00:18:08.313 "data_size": 0 00:18:08.313 }, 00:18:08.313 { 00:18:08.313 "name": null, 00:18:08.313 "uuid": "8749105b-38fb-4cd2-9de1-69504f784d42", 00:18:08.313 "is_configured": false, 00:18:08.313 "data_offset": 0, 00:18:08.313 "data_size": 65536 00:18:08.313 }, 00:18:08.313 { 00:18:08.313 "name": "BaseBdev3", 00:18:08.313 "uuid": "21b896b7-40b8-4ea2-b1bf-968806ca6b21", 00:18:08.313 "is_configured": true, 00:18:08.313 "data_offset": 0, 00:18:08.313 "data_size": 65536 00:18:08.313 }, 00:18:08.313 { 00:18:08.313 "name": "BaseBdev4", 00:18:08.313 "uuid": "b18d0919-fc2b-4c31-9b7b-e386e5afaf7c", 00:18:08.313 "is_configured": true, 00:18:08.313 
"data_offset": 0, 00:18:08.313 "data_size": 65536 00:18:08.313 } 00:18:08.313 ] 00:18:08.313 }' 00:18:08.313 13:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:08.313 13:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.881 13:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:08.881 13:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.140 13:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:09.140 13:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:09.140 [2024-07-26 13:17:49.637907] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:09.140 BaseBdev1 00:18:09.140 13:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:09.140 13:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:09.140 13:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:09.140 13:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:09.140 13:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:09.140 13:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:09.140 13:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:09.400 
13:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:09.659 [ 00:18:09.659 { 00:18:09.659 "name": "BaseBdev1", 00:18:09.659 "aliases": [ 00:18:09.659 "b294906b-6e61-4d68-b55b-04a07a5a7687" 00:18:09.659 ], 00:18:09.659 "product_name": "Malloc disk", 00:18:09.659 "block_size": 512, 00:18:09.659 "num_blocks": 65536, 00:18:09.659 "uuid": "b294906b-6e61-4d68-b55b-04a07a5a7687", 00:18:09.659 "assigned_rate_limits": { 00:18:09.659 "rw_ios_per_sec": 0, 00:18:09.659 "rw_mbytes_per_sec": 0, 00:18:09.659 "r_mbytes_per_sec": 0, 00:18:09.659 "w_mbytes_per_sec": 0 00:18:09.659 }, 00:18:09.659 "claimed": true, 00:18:09.659 "claim_type": "exclusive_write", 00:18:09.659 "zoned": false, 00:18:09.659 "supported_io_types": { 00:18:09.659 "read": true, 00:18:09.659 "write": true, 00:18:09.659 "unmap": true, 00:18:09.659 "flush": true, 00:18:09.659 "reset": true, 00:18:09.659 "nvme_admin": false, 00:18:09.659 "nvme_io": false, 00:18:09.659 "nvme_io_md": false, 00:18:09.659 "write_zeroes": true, 00:18:09.659 "zcopy": true, 00:18:09.659 "get_zone_info": false, 00:18:09.659 "zone_management": false, 00:18:09.659 "zone_append": false, 00:18:09.659 "compare": false, 00:18:09.659 "compare_and_write": false, 00:18:09.659 "abort": true, 00:18:09.659 "seek_hole": false, 00:18:09.659 "seek_data": false, 00:18:09.659 "copy": true, 00:18:09.659 "nvme_iov_md": false 00:18:09.659 }, 00:18:09.659 "memory_domains": [ 00:18:09.659 { 00:18:09.659 "dma_device_id": "system", 00:18:09.659 "dma_device_type": 1 00:18:09.659 }, 00:18:09.659 { 00:18:09.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.659 "dma_device_type": 2 00:18:09.659 } 00:18:09.659 ], 00:18:09.659 "driver_specific": {} 00:18:09.659 } 00:18:09.659 ] 00:18:09.659 13:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:09.659 13:17:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:09.659 13:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:09.659 13:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:09.659 13:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:09.659 13:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:09.659 13:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:09.659 13:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:09.659 13:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:09.659 13:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:09.659 13:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:09.659 13:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.659 13:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.918 13:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:09.918 "name": "Existed_Raid", 00:18:09.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.919 "strip_size_kb": 64, 00:18:09.919 "state": "configuring", 00:18:09.919 "raid_level": "raid0", 00:18:09.919 "superblock": false, 00:18:09.919 "num_base_bdevs": 4, 00:18:09.919 "num_base_bdevs_discovered": 3, 00:18:09.919 "num_base_bdevs_operational": 4, 00:18:09.919 "base_bdevs_list": [ 00:18:09.919 { 
00:18:09.919 "name": "BaseBdev1", 00:18:09.919 "uuid": "b294906b-6e61-4d68-b55b-04a07a5a7687", 00:18:09.919 "is_configured": true, 00:18:09.919 "data_offset": 0, 00:18:09.919 "data_size": 65536 00:18:09.919 }, 00:18:09.919 { 00:18:09.919 "name": null, 00:18:09.919 "uuid": "8749105b-38fb-4cd2-9de1-69504f784d42", 00:18:09.919 "is_configured": false, 00:18:09.919 "data_offset": 0, 00:18:09.919 "data_size": 65536 00:18:09.919 }, 00:18:09.919 { 00:18:09.919 "name": "BaseBdev3", 00:18:09.919 "uuid": "21b896b7-40b8-4ea2-b1bf-968806ca6b21", 00:18:09.919 "is_configured": true, 00:18:09.919 "data_offset": 0, 00:18:09.919 "data_size": 65536 00:18:09.919 }, 00:18:09.919 { 00:18:09.919 "name": "BaseBdev4", 00:18:09.919 "uuid": "b18d0919-fc2b-4c31-9b7b-e386e5afaf7c", 00:18:09.919 "is_configured": true, 00:18:09.919 "data_offset": 0, 00:18:09.919 "data_size": 65536 00:18:09.919 } 00:18:09.919 ] 00:18:09.919 }' 00:18:09.919 13:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:09.919 13:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.486 13:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.486 13:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:10.746 13:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:10.746 13:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:10.746 [2024-07-26 13:17:51.226112] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:10.746 13:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 4
00:18:10.746 13:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:18:10.746 13:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:18:10.746 13:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:18:10.746 13:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:18:10.746 13:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:18:10.746 13:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:18:10.746 13:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:18:10.746 13:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:18:10.746 13:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:18:10.746 13:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:10.746 13:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:11.005 13:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:18:11.005 "name": "Existed_Raid",
00:18:11.005 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:11.005 "strip_size_kb": 64,
00:18:11.005 "state": "configuring",
00:18:11.005 "raid_level": "raid0",
00:18:11.005 "superblock": false,
00:18:11.005 "num_base_bdevs": 4,
00:18:11.005 "num_base_bdevs_discovered": 2,
00:18:11.005 "num_base_bdevs_operational": 4,
00:18:11.005 "base_bdevs_list": [
00:18:11.005 {
00:18:11.005 "name": "BaseBdev1",
00:18:11.005 "uuid": "b294906b-6e61-4d68-b55b-04a07a5a7687", "is_configured": true,
00:18:11.005 "data_offset": 0,
00:18:11.005 "data_size": 65536
00:18:11.005 },
00:18:11.005 {
00:18:11.005 "name": null,
00:18:11.005 "uuid": "8749105b-38fb-4cd2-9de1-69504f784d42",
00:18:11.005 "is_configured": false,
00:18:11.005 "data_offset": 0,
00:18:11.005 "data_size": 65536
00:18:11.005 },
00:18:11.005 {
00:18:11.005 "name": null,
00:18:11.005 "uuid": "21b896b7-40b8-4ea2-b1bf-968806ca6b21",
00:18:11.005 "is_configured": false,
00:18:11.005 "data_offset": 0,
00:18:11.005 "data_size": 65536
00:18:11.005 },
00:18:11.005 {
00:18:11.005 "name": "BaseBdev4",
00:18:11.005 "uuid": "b18d0919-fc2b-4c31-9b7b-e386e5afaf7c",
00:18:11.005 "is_configured": true,
00:18:11.005 "data_offset": 0,
00:18:11.005 "data_size": 65536
00:18:11.005 }
00:18:11.005 ]
00:18:11.005 }'
00:18:11.005 13:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:18:11.005 13:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:11.573 13:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:11.573 13:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:18:11.831 13:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]]
00:18:11.831 13:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:18:12.090 [2024-07-26 13:17:52.461387] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:18:12.090 13:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:18:12.090 13:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:18:12.090 13:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:18:12.090 13:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:18:12.090 13:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:18:12.090 13:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:18:12.090 13:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:18:12.090 13:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:18:12.090 13:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:18:12.090 13:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:18:12.090 13:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:12.090 13:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:12.349 13:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:18:12.349 "name": "Existed_Raid",
00:18:12.349 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:12.349 "strip_size_kb": 64,
00:18:12.349 "state": "configuring",
00:18:12.349 "raid_level": "raid0",
00:18:12.349 "superblock": false,
00:18:12.349 "num_base_bdevs": 4,
00:18:12.349 "num_base_bdevs_discovered": 3,
00:18:12.349 "num_base_bdevs_operational": 4,
00:18:12.349 "base_bdevs_list": [
00:18:12.349 {
00:18:12.349 "name": "BaseBdev1",
00:18:12.349 "uuid": "b294906b-6e61-4d68-b55b-04a07a5a7687",
00:18:12.349 "is_configured": true,
00:18:12.349 "data_offset": 0,
00:18:12.349 "data_size": 65536
00:18:12.349 },
00:18:12.349 {
00:18:12.349 "name": null,
00:18:12.349 "uuid": "8749105b-38fb-4cd2-9de1-69504f784d42",
00:18:12.349 "is_configured": false,
00:18:12.349 "data_offset": 0,
00:18:12.349 "data_size": 65536
00:18:12.349 },
00:18:12.349 {
00:18:12.349 "name": "BaseBdev3",
00:18:12.349 "uuid": "21b896b7-40b8-4ea2-b1bf-968806ca6b21",
00:18:12.349 "is_configured": true,
00:18:12.349 "data_offset": 0,
00:18:12.349 "data_size": 65536
00:18:12.349 },
00:18:12.349 {
00:18:12.349 "name": "BaseBdev4",
00:18:12.349 "uuid": "b18d0919-fc2b-4c31-9b7b-e386e5afaf7c",
00:18:12.349 "is_configured": true,
00:18:12.349 "data_offset": 0,
00:18:12.349 "data_size": 65536
00:18:12.349 }
00:18:12.349 ]
00:18:12.349 }'
00:18:12.349 13:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:18:12.349 13:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:12.931 13:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:12.931 13:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:18:13.199 13:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]]
00:18:13.199 13:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:18:13.458 [2024-07-26 13:17:53.728740] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:18:13.458 13:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:18:13.458 13:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:18:13.458 13:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:18:13.458 13:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:18:13.458 13:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:18:13.458 13:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:18:13.458 13:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:18:13.458 13:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:18:13.458 13:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:18:13.458 13:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:18:13.458 13:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:13.458 13:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:13.717 13:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:18:13.717 "name": "Existed_Raid",
00:18:13.717 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:13.717 "strip_size_kb": 64,
00:18:13.717 "state": "configuring",
00:18:13.717 "raid_level": "raid0",
00:18:13.717 "superblock": false,
00:18:13.717 "num_base_bdevs": 4,
00:18:13.717 "num_base_bdevs_discovered": 2,
00:18:13.717 "num_base_bdevs_operational": 4,
00:18:13.717 "base_bdevs_list": [
00:18:13.717 {
00:18:13.717 "name": null,
00:18:13.717 "uuid": "b294906b-6e61-4d68-b55b-04a07a5a7687",
00:18:13.717 "is_configured": false,
00:18:13.717 "data_offset": 0,
00:18:13.717 "data_size": 65536
00:18:13.717 },
00:18:13.717 {
00:18:13.717 "name": null,
00:18:13.717 "uuid": "8749105b-38fb-4cd2-9de1-69504f784d42",
00:18:13.717 "is_configured": false,
00:18:13.717 "data_offset": 0,
00:18:13.717 "data_size": 65536
00:18:13.717 },
00:18:13.717 {
00:18:13.717 "name": "BaseBdev3",
00:18:13.717 "uuid": "21b896b7-40b8-4ea2-b1bf-968806ca6b21",
00:18:13.717 "is_configured": true,
00:18:13.717 "data_offset": 0,
00:18:13.717 "data_size": 65536
00:18:13.717 },
00:18:13.717 {
00:18:13.717 "name": "BaseBdev4",
00:18:13.717 "uuid": "b18d0919-fc2b-4c31-9b7b-e386e5afaf7c",
00:18:13.717 "is_configured": true,
00:18:13.717 "data_offset": 0,
00:18:13.717 "data_size": 65536
00:18:13.717 }
00:18:13.717 ]
00:18:13.717 }'
00:18:13.717 13:17:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:18:13.717 13:17:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:14.283 13:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:14.283 13:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:18:14.283 13:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]]
00:18:14.283 13:17:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:18:14.542 [2024-07-26 13:17:55.009780] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:18:14.542 13:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:18:14.542 13:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:18:14.542 13:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:18:14.542 13:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:18:14.542 13:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:18:14.542 13:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:18:14.542 13:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:18:14.542 13:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:18:14.542 13:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:18:14.542 13:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:18:14.542 13:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:14.542 13:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:14.803 13:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:18:14.803 "name": "Existed_Raid",
00:18:14.803 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:14.803 "strip_size_kb": 64,
00:18:14.803 "state": "configuring",
00:18:14.803 "raid_level": "raid0",
00:18:14.803 "superblock": false,
00:18:14.803 "num_base_bdevs": 4,
00:18:14.803 "num_base_bdevs_discovered": 3,
00:18:14.803 "num_base_bdevs_operational": 4,
00:18:14.803 "base_bdevs_list": [
00:18:14.803 {
00:18:14.803 "name": null,
00:18:14.803 "uuid": "b294906b-6e61-4d68-b55b-04a07a5a7687",
00:18:14.803 "is_configured": false,
00:18:14.803 "data_offset": 0,
00:18:14.803 "data_size": 65536
00:18:14.803 },
00:18:14.803 {
00:18:14.803 "name": "BaseBdev2",
00:18:14.803 "uuid": "8749105b-38fb-4cd2-9de1-69504f784d42",
00:18:14.803 "is_configured": true,
00:18:14.803 "data_offset": 0,
00:18:14.803 "data_size": 65536
00:18:14.803 },
00:18:14.803 {
00:18:14.803 "name": "BaseBdev3",
00:18:14.803 "uuid": "21b896b7-40b8-4ea2-b1bf-968806ca6b21",
00:18:14.803 "is_configured": true,
00:18:14.803 "data_offset": 0,
00:18:14.803 "data_size": 65536
00:18:14.803 },
00:18:14.803 {
00:18:14.803 "name": "BaseBdev4",
00:18:14.803 "uuid": "b18d0919-fc2b-4c31-9b7b-e386e5afaf7c",
00:18:14.803 "is_configured": true,
00:18:14.803 "data_offset": 0,
00:18:14.803 "data_size": 65536
00:18:14.803 }
00:18:14.803 ]
00:18:14.803 }'
00:18:14.803 13:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:18:14.803 13:17:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:15.464 13:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:15.464 13:17:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:18:15.723 13:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]]
00:18:15.723 13:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:15.723 13:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:18:15.983 13:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u b294906b-6e61-4d68-b55b-04a07a5a7687
00:18:15.983 [2024-07-26 13:17:56.480888] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:18:15.983 [2024-07-26 13:17:56.480924] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x15e9ad0
00:18:15.983 [2024-07-26 13:17:56.480932] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
00:18:15.983 [2024-07-26 13:17:56.481113] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17930d0
00:18:15.983 [2024-07-26 13:17:56.481233] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15e9ad0
00:18:15.983 [2024-07-26 13:17:56.481243] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x15e9ad0
00:18:15.983 [2024-07-26 13:17:56.481398] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:15.983 NewBaseBdev
00:18:15.983 13:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev
00:18:15.983 13:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev
00:18:15.983 13:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:18:15.983 13:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:18:15.983 13:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:18:15.983 13:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:18:15.983 13:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:18:16.242 13:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000
00:18:16.501 [
00:18:16.501 {
00:18:16.501 "name": "NewBaseBdev",
00:18:16.501 "aliases": [
00:18:16.501 "b294906b-6e61-4d68-b55b-04a07a5a7687"
00:18:16.501 ],
00:18:16.501 "product_name": "Malloc disk",
00:18:16.501 "block_size": 512,
00:18:16.501 "num_blocks": 65536,
00:18:16.501 "uuid": "b294906b-6e61-4d68-b55b-04a07a5a7687",
00:18:16.501 "assigned_rate_limits": {
00:18:16.501 "rw_ios_per_sec": 0,
00:18:16.501 "rw_mbytes_per_sec": 0,
00:18:16.501 "r_mbytes_per_sec": 0,
00:18:16.501 "w_mbytes_per_sec": 0
00:18:16.501 },
00:18:16.501 "claimed": true,
00:18:16.501 "claim_type": "exclusive_write",
00:18:16.501 "zoned": false,
00:18:16.501 "supported_io_types": {
00:18:16.501 "read": true,
00:18:16.501 "write": true,
00:18:16.501 "unmap": true,
00:18:16.501 "flush": true,
00:18:16.501 "reset": true,
00:18:16.501 "nvme_admin": false,
00:18:16.501 "nvme_io": false,
00:18:16.501 "nvme_io_md": false,
00:18:16.501 "write_zeroes": true,
00:18:16.501 "zcopy": true,
00:18:16.501 "get_zone_info": false,
00:18:16.501 "zone_management": false,
00:18:16.501 "zone_append": false,
00:18:16.501 "compare": false,
00:18:16.501 "compare_and_write": false,
00:18:16.501 "abort": true,
00:18:16.501 "seek_hole": false,
00:18:16.501 "seek_data": false,
00:18:16.501 "copy": true,
00:18:16.501 "nvme_iov_md": false
00:18:16.501 },
00:18:16.501 "memory_domains": [
00:18:16.501 {
00:18:16.501 "dma_device_id": "system",
00:18:16.501 "dma_device_type": 1
00:18:16.501 },
00:18:16.501 {
00:18:16.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:16.501 "dma_device_type": 2
00:18:16.501 }
00:18:16.501 ],
00:18:16.501 "driver_specific": {}
00:18:16.501 }
00:18:16.501 ]
00:18:16.501 13:17:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:18:16.501 13:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4
00:18:16.501 13:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:18:16.501 13:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:18:16.501 13:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:18:16.501 13:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:18:16.501 13:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:18:16.502 13:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:18:16.502 13:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:18:16.502 13:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:18:16.502 13:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:18:16.502 13:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:16.502 13:17:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:16.761 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:18:16.761 "name": "Existed_Raid",
00:18:16.761 "uuid": "f2d29523-ddf3-4be7-b0b9-e239e96d347d",
00:18:16.761 "strip_size_kb": 64,
00:18:16.761 "state": "online",
00:18:16.761 "raid_level": "raid0",
00:18:16.761 "superblock": false,
00:18:16.761 "num_base_bdevs": 4,
00:18:16.761 "num_base_bdevs_discovered": 4,
00:18:16.761 "num_base_bdevs_operational": 4,
00:18:16.761 "base_bdevs_list": [
00:18:16.761 {
00:18:16.761 "name": "NewBaseBdev",
00:18:16.761 "uuid": "b294906b-6e61-4d68-b55b-04a07a5a7687",
00:18:16.761 "is_configured": true,
00:18:16.761 "data_offset": 0,
00:18:16.761 "data_size": 65536
00:18:16.761 },
00:18:16.761 {
00:18:16.761 "name": "BaseBdev2",
00:18:16.761 "uuid": "8749105b-38fb-4cd2-9de1-69504f784d42",
00:18:16.761 "is_configured": true,
00:18:16.761 "data_offset": 0,
00:18:16.761 "data_size": 65536
00:18:16.761 },
00:18:16.761 {
00:18:16.761 "name": "BaseBdev3",
00:18:16.761 "uuid": "21b896b7-40b8-4ea2-b1bf-968806ca6b21",
00:18:16.761 "is_configured": true,
00:18:16.761 "data_offset": 0,
00:18:16.761 "data_size": 65536
00:18:16.761 },
00:18:16.761 {
00:18:16.761 "name": "BaseBdev4",
00:18:16.761 "uuid": "b18d0919-fc2b-4c31-9b7b-e386e5afaf7c",
00:18:16.761 "is_configured": true,
00:18:16.761 "data_offset": 0,
00:18:16.761 "data_size": 65536
00:18:16.761 }
00:18:16.761 ]
00:18:16.761 }'
00:18:16.761 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:18:16.761 13:17:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:17.330 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid
00:18:17.330 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid
00:18:17.330 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info
00:18:17.330 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info
00:18:17.330 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names
00:18:17.330 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name
00:18:17.330 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid
00:18:17.330 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]'
00:18:17.330 [2024-07-26 13:17:57.816840] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:17.330 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{
00:18:17.330 "name": "Existed_Raid",
00:18:17.330 "aliases": [
00:18:17.330 "f2d29523-ddf3-4be7-b0b9-e239e96d347d"
00:18:17.330 ],
00:18:17.330 "product_name": "Raid Volume",
00:18:17.330 "block_size": 512,
00:18:17.330 "num_blocks": 262144,
00:18:17.330 "uuid": "f2d29523-ddf3-4be7-b0b9-e239e96d347d",
00:18:17.330 "assigned_rate_limits": {
00:18:17.330 "rw_ios_per_sec": 0,
00:18:17.330 "rw_mbytes_per_sec": 0,
00:18:17.330 "r_mbytes_per_sec": 0,
00:18:17.330 "w_mbytes_per_sec": 0
00:18:17.330 },
00:18:17.330 "claimed": false,
00:18:17.330 "zoned": false,
00:18:17.330 "supported_io_types": {
00:18:17.330 "read": true,
00:18:17.330 "write": true,
00:18:17.330 "unmap": true,
00:18:17.330 "flush": true,
00:18:17.330 "reset": true,
00:18:17.330 "nvme_admin": false,
00:18:17.330 "nvme_io": false,
00:18:17.330 "nvme_io_md": false,
00:18:17.330 "write_zeroes": true,
00:18:17.330 "zcopy": false,
00:18:17.330 "get_zone_info": false,
00:18:17.330 "zone_management": false,
00:18:17.330 "zone_append": false,
00:18:17.330 "compare": false,
00:18:17.330 "compare_and_write": false,
00:18:17.330 "abort": false,
00:18:17.330 "seek_hole": false,
00:18:17.330 "seek_data": false,
00:18:17.330 "copy": false,
00:18:17.330 "nvme_iov_md": false
00:18:17.330 },
00:18:17.330 "memory_domains": [
00:18:17.330 {
00:18:17.330 "dma_device_id": "system",
00:18:17.330 "dma_device_type": 1
00:18:17.330 },
00:18:17.330 {
00:18:17.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:17.330 "dma_device_type": 2
00:18:17.330 },
00:18:17.330 {
00:18:17.330 "dma_device_id": "system",
00:18:17.330 "dma_device_type": 1
00:18:17.330 },
00:18:17.330 {
00:18:17.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:17.330 "dma_device_type": 2
00:18:17.330 },
00:18:17.330 {
00:18:17.330 "dma_device_id": "system",
00:18:17.330 "dma_device_type": 1
00:18:17.330 },
00:18:17.330 {
00:18:17.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:17.330 "dma_device_type": 2
00:18:17.330 },
00:18:17.330 {
00:18:17.330 "dma_device_id": "system",
00:18:17.330 "dma_device_type": 1
00:18:17.330 },
00:18:17.330 {
00:18:17.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:17.330 "dma_device_type": 2
00:18:17.330 }
00:18:17.330 ],
00:18:17.330 "driver_specific": {
00:18:17.330 "raid": {
00:18:17.330 "uuid": "f2d29523-ddf3-4be7-b0b9-e239e96d347d",
00:18:17.330 "strip_size_kb": 64,
00:18:17.330 "state": "online",
00:18:17.330 "raid_level": "raid0",
00:18:17.330 "superblock": false,
00:18:17.330 "num_base_bdevs": 4,
00:18:17.330 "num_base_bdevs_discovered": 4,
00:18:17.330 "num_base_bdevs_operational": 4,
00:18:17.330 "base_bdevs_list": [
00:18:17.330 {
00:18:17.330 "name": "NewBaseBdev",
00:18:17.330 "uuid": "b294906b-6e61-4d68-b55b-04a07a5a7687",
00:18:17.330 "is_configured": true,
00:18:17.330 "data_offset": 0,
00:18:17.330 "data_size": 65536
00:18:17.330 },
00:18:17.330 {
00:18:17.330 "name": "BaseBdev2",
00:18:17.330 "uuid": "8749105b-38fb-4cd2-9de1-69504f784d42",
00:18:17.330 "is_configured": true,
00:18:17.330 "data_offset": 0,
00:18:17.330 "data_size": 65536
00:18:17.330 },
00:18:17.330 {
00:18:17.330 "name": "BaseBdev3",
00:18:17.330 "uuid": "21b896b7-40b8-4ea2-b1bf-968806ca6b21",
00:18:17.330 "is_configured": true,
00:18:17.330 "data_offset": 0,
00:18:17.330 "data_size": 65536
00:18:17.330 },
00:18:17.330 {
00:18:17.330 "name": "BaseBdev4",
00:18:17.330 "uuid": "b18d0919-fc2b-4c31-9b7b-e386e5afaf7c",
00:18:17.330 "is_configured": true,
00:18:17.330 "data_offset": 0,
00:18:17.330 "data_size": 65536
00:18:17.330 }
00:18:17.330 ]
00:18:17.330 }
00:18:17.330 }
00:18:17.330 }'
00:18:17.330 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:18:17.590 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev
00:18:17.590 BaseBdev2
00:18:17.590 BaseBdev3
00:18:17.590 BaseBdev4'
00:18:17.590 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:18:17.590 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:18:17.590 13:17:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev
00:18:17.590 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:18:17.590 "name": "NewBaseBdev",
00:18:17.590 "aliases": [
00:18:17.590 "b294906b-6e61-4d68-b55b-04a07a5a7687"
00:18:17.590 ],
00:18:17.590 "product_name": "Malloc disk",
00:18:17.591 "block_size": 512,
00:18:17.591 "num_blocks": 65536,
00:18:17.591 "uuid": "b294906b-6e61-4d68-b55b-04a07a5a7687",
00:18:17.591 "assigned_rate_limits": {
00:18:17.591 "rw_ios_per_sec": 0,
00:18:17.591 "rw_mbytes_per_sec": 0,
00:18:17.591 "r_mbytes_per_sec": 0,
00:18:17.591 "w_mbytes_per_sec": 0
00:18:17.591 },
00:18:17.591 "claimed": true,
00:18:17.591 "claim_type": "exclusive_write",
00:18:17.591 "zoned": false,
00:18:17.591 "supported_io_types": {
00:18:17.591 "read": true,
00:18:17.591 "write": true,
00:18:17.591 "unmap": true,
00:18:17.591 "flush": true,
00:18:17.591 "reset": true,
00:18:17.591 "nvme_admin": false,
00:18:17.591 "nvme_io": false,
00:18:17.591 "nvme_io_md": false,
00:18:17.591 "write_zeroes": true,
00:18:17.591 "zcopy": true,
00:18:17.591 "get_zone_info": false,
00:18:17.591 "zone_management": false,
00:18:17.591 "zone_append": false,
00:18:17.591 "compare": false,
00:18:17.591 "compare_and_write": false,
00:18:17.591 "abort": true,
00:18:17.591 "seek_hole": false,
00:18:17.591 "seek_data": false,
00:18:17.591 "copy": true,
00:18:17.591 "nvme_iov_md": false
00:18:17.591 },
00:18:17.591 "memory_domains": [
00:18:17.591 {
00:18:17.591 "dma_device_id": "system",
00:18:17.591 "dma_device_type": 1
00:18:17.591 },
00:18:17.591 {
00:18:17.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:17.591 "dma_device_type": 2
00:18:17.591 }
00:18:17.591 ],
00:18:17.591 "driver_specific": {}
00:18:17.591 }'
00:18:17.591 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:18:17.850 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:18:17.850 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:18:17.850 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:18:17.850 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:18:17.850 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:18:17.850 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:18:17.850 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:18:17.850 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:18:17.850 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:18:18.109 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:18:18.109 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:18:18.109 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:18:18.109 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2
00:18:18.109 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:18:18.369 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:18:18.369 "name": "BaseBdev2",
00:18:18.369 "aliases": [
00:18:18.369 "8749105b-38fb-4cd2-9de1-69504f784d42"
00:18:18.369 ],
00:18:18.369 "product_name": "Malloc disk",
00:18:18.369 "block_size": 512,
00:18:18.369 "num_blocks": 65536,
00:18:18.369 "uuid": "8749105b-38fb-4cd2-9de1-69504f784d42",
00:18:18.369 "assigned_rate_limits": {
00:18:18.369 "rw_ios_per_sec": 0,
00:18:18.369 "rw_mbytes_per_sec": 0,
00:18:18.369 "r_mbytes_per_sec": 0,
00:18:18.369 "w_mbytes_per_sec": 0
00:18:18.369 },
00:18:18.369 "claimed": true,
00:18:18.369 "claim_type": "exclusive_write",
00:18:18.369 "zoned": false,
00:18:18.369 "supported_io_types": {
00:18:18.369 "read": true,
00:18:18.369 "write": true,
00:18:18.369 "unmap": true,
00:18:18.369 "flush": true,
00:18:18.369 "reset": true,
00:18:18.369 "nvme_admin": false,
00:18:18.369 "nvme_io": false,
00:18:18.369 "nvme_io_md": false,
00:18:18.369 "write_zeroes": true,
00:18:18.369 "zcopy": true,
00:18:18.369 "get_zone_info": false,
00:18:18.369 "zone_management": false,
00:18:18.369 "zone_append": false,
00:18:18.369 "compare": false,
00:18:18.369 "compare_and_write": false,
00:18:18.369 "abort": true,
00:18:18.369 "seek_hole": false,
00:18:18.369 "seek_data": false,
00:18:18.369 "copy": true,
00:18:18.369 "nvme_iov_md": false
00:18:18.369 },
00:18:18.369 "memory_domains": [
00:18:18.369 {
00:18:18.369 "dma_device_id": "system",
00:18:18.369 "dma_device_type": 1
00:18:18.369 },
00:18:18.369 {
00:18:18.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:18.369 "dma_device_type": 2
00:18:18.369 }
00:18:18.369 ],
00:18:18.369 "driver_specific": {}
00:18:18.369 }'
00:18:18.369 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:18:18.369 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:18:18.369 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:18:18.369 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:18:18.369 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:18:18.369 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:18:18.369 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:18:18.369 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:18:18.628 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:18:18.628 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:18:18.628 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:18:18.628 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:18:18.628 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:18:18.628 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3
00:18:18.628 13:17:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:18:18.888 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:18:18.888 "name": "BaseBdev3",
00:18:18.888 "aliases": [
00:18:18.888 "21b896b7-40b8-4ea2-b1bf-968806ca6b21"
00:18:18.888 ],
00:18:18.888 "product_name": "Malloc disk",
00:18:18.888 "block_size": 512,
00:18:18.888 "num_blocks": 65536,
00:18:18.888 "uuid": "21b896b7-40b8-4ea2-b1bf-968806ca6b21",
00:18:18.888 "assigned_rate_limits": {
00:18:18.888 "rw_ios_per_sec": 0,
00:18:18.888 "rw_mbytes_per_sec": 0,
00:18:18.888 "r_mbytes_per_sec": 0,
00:18:18.888 "w_mbytes_per_sec": 0
00:18:18.888 },
00:18:18.888 "claimed": true,
00:18:18.888 "claim_type": "exclusive_write",
00:18:18.888 "zoned": false,
00:18:18.888 "supported_io_types": {
00:18:18.888 "read": true,
00:18:18.888 "write": true,
00:18:18.888 "unmap": true,
00:18:18.888 "flush": true,
00:18:18.888 "reset": true,
00:18:18.888 "nvme_admin": false,
00:18:18.888 "nvme_io": false,
00:18:18.888 "nvme_io_md": false,
00:18:18.888 "write_zeroes": true,
00:18:18.888 "zcopy": true,
00:18:18.888 "get_zone_info": false,
00:18:18.888 "zone_management": false,
00:18:18.888 "zone_append": false,
00:18:18.888 "compare": false,
00:18:18.888 "compare_and_write": false,
00:18:18.888 "abort": true,
00:18:18.888 "seek_hole": false,
00:18:18.888 "seek_data": false,
00:18:18.888 "copy": true,
00:18:18.888 "nvme_iov_md": false
00:18:18.888 },
00:18:18.888 "memory_domains": [
00:18:18.888 {
00:18:18.888 "dma_device_id": "system",
00:18:18.888 "dma_device_type": 1
00:18:18.888 },
00:18:18.888 {
00:18:18.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:18.888 "dma_device_type": 2
00:18:18.888 }
00:18:18.888 ],
00:18:18.888 "driver_specific": {}
00:18:18.888 }'
00:18:18.888 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:18:18.888 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:18:18.888 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:18:18.888 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:18:18.888 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:18:18.888 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:18:18.888 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:18:19.147 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:18:19.147 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:18:19.147 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:18:19.147 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:18:19.147 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:18:19.147 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:18:19.147 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4
00:18:19.147 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:18:19.407 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:18:19.407 "name": "BaseBdev4",
00:18:19.407 "aliases": [
00:18:19.407 "b18d0919-fc2b-4c31-9b7b-e386e5afaf7c"
00:18:19.407 ],
00:18:19.407 "product_name": "Malloc disk",
00:18:19.407 "block_size": 512,
00:18:19.407 "num_blocks": 65536,
00:18:19.407 "uuid": "b18d0919-fc2b-4c31-9b7b-e386e5afaf7c",
00:18:19.407 "assigned_rate_limits": {
00:18:19.407 "rw_ios_per_sec": 0,
00:18:19.407 "rw_mbytes_per_sec": 0,
00:18:19.407 "r_mbytes_per_sec": 0,
00:18:19.407 "w_mbytes_per_sec": 0
00:18:19.407 },
00:18:19.407 "claimed": true,
00:18:19.407 "claim_type": "exclusive_write",
00:18:19.407 "zoned": false,
00:18:19.407 "supported_io_types": {
00:18:19.407 "read": true,
00:18:19.407 "write": true,
00:18:19.407 "unmap": true,
00:18:19.407 "flush": true,
00:18:19.407 "reset": true,
00:18:19.407 "nvme_admin": false,
00:18:19.407 "nvme_io": false,
00:18:19.407 "nvme_io_md": false,
00:18:19.407 "write_zeroes": true,
00:18:19.407 "zcopy": true,
00:18:19.407 "get_zone_info": false,
00:18:19.407 "zone_management": false,
00:18:19.407 "zone_append": false,
00:18:19.407 "compare": false,
00:18:19.407 "compare_and_write": false,
00:18:19.407 "abort": true,
00:18:19.407 "seek_hole": false,
00:18:19.407 "seek_data": false,
00:18:19.407 "copy": true,
00:18:19.407 "nvme_iov_md": false
00:18:19.407 },
00:18:19.407 "memory_domains": [
00:18:19.407 {
00:18:19.407 "dma_device_id": "system",
00:18:19.407 "dma_device_type": 1
00:18:19.407 },
00:18:19.407 {
00:18:19.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:19.407 "dma_device_type": 2
00:18:19.407 }
00:18:19.407 ],
00:18:19.407 "driver_specific": {}
00:18:19.407 }'
00:18:19.407 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:18:19.407 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:18:19.407 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:18:19.407 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:18:19.667 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:18:19.667 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:18:19.667 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:18:19.667 13:17:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:18:19.667 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:18:19.667 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:18:19.667 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:18:19.667 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:18:19.667 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:18:19.926 [2024-07-26 13:18:00.295067] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:18:19.926 [2024-07-26 13:18:00.295093] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:18:19.926 [2024-07-26 13:18:00.295160] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:18:19.926 [2024-07-26 13:18:00.295218]
bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:19.927 [2024-07-26 13:18:00.295229] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15e9ad0 name Existed_Raid, state offline 00:18:19.927 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 727263 00:18:19.927 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 727263 ']' 00:18:19.927 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 727263 00:18:19.927 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:18:19.927 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:19.927 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 727263 00:18:19.927 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:19.927 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:19.927 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 727263' 00:18:19.927 killing process with pid 727263 00:18:19.927 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 727263 00:18:19.927 [2024-07-26 13:18:00.373300] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:19.927 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 727263 00:18:19.927 [2024-07-26 13:18:00.404527] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:18:20.187 00:18:20.187 real 0m30.129s 00:18:20.187 user 0m55.247s 00:18:20.187 sys 0m5.406s 00:18:20.187 13:18:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.187 ************************************ 00:18:20.187 END TEST raid_state_function_test 00:18:20.187 ************************************ 00:18:20.187 13:18:00 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:18:20.187 13:18:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:20.187 13:18:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:20.187 13:18:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:20.187 ************************************ 00:18:20.187 START TEST raid_state_function_test_sb 00:18:20.187 ************************************ 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 
00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # 
strip_size_create_arg='-z 64' 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=733068 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 733068' 00:18:20.187 Process raid pid: 733068 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 733068 /var/tmp/spdk-raid.sock 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 733068 ']' 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:20.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:20.187 13:18:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.447 [2024-07-26 13:18:00.748547] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:18:20.447 [2024-07-26 13:18:00.748607] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:20.447 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:20.447 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:20.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:20.447 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:20.447 [2024-07-26 13:18:00.882218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.447 [2024-07-26 13:18:00.971238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.706 [2024-07-26 13:18:01.030006] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:20.706 [2024-07-26 13:18:01.030030] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:21.274 13:18:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:21.274 13:18:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:18:21.274 13:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:21.533 [2024-07-26 13:18:01.855867] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:21.533 [2024-07-26 13:18:01.855906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev1 doesn't exist now 00:18:21.533 [2024-07-26 13:18:01.855917] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:21.533 [2024-07-26 13:18:01.855928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:21.533 [2024-07-26 13:18:01.855936] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:21.533 [2024-07-26 13:18:01.855946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:21.533 [2024-07-26 13:18:01.855954] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:21.533 [2024-07-26 13:18:01.855964] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:21.533 13:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:21.533 13:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:21.533 13:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:21.533 13:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:21.533 13:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:21.533 13:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:21.533 13:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:21.533 13:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:21.533 13:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:21.533 13:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:18:21.533 13:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.533 13:18:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.792 13:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:21.792 "name": "Existed_Raid", 00:18:21.792 "uuid": "77ed36b5-62dd-4db8-9563-59bf3cd9d48e", 00:18:21.792 "strip_size_kb": 64, 00:18:21.792 "state": "configuring", 00:18:21.792 "raid_level": "raid0", 00:18:21.792 "superblock": true, 00:18:21.792 "num_base_bdevs": 4, 00:18:21.792 "num_base_bdevs_discovered": 0, 00:18:21.792 "num_base_bdevs_operational": 4, 00:18:21.792 "base_bdevs_list": [ 00:18:21.792 { 00:18:21.792 "name": "BaseBdev1", 00:18:21.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.792 "is_configured": false, 00:18:21.792 "data_offset": 0, 00:18:21.792 "data_size": 0 00:18:21.792 }, 00:18:21.792 { 00:18:21.792 "name": "BaseBdev2", 00:18:21.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.792 "is_configured": false, 00:18:21.792 "data_offset": 0, 00:18:21.792 "data_size": 0 00:18:21.792 }, 00:18:21.792 { 00:18:21.792 "name": "BaseBdev3", 00:18:21.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.792 "is_configured": false, 00:18:21.792 "data_offset": 0, 00:18:21.792 "data_size": 0 00:18:21.792 }, 00:18:21.792 { 00:18:21.792 "name": "BaseBdev4", 00:18:21.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.792 "is_configured": false, 00:18:21.792 "data_offset": 0, 00:18:21.792 "data_size": 0 00:18:21.792 } 00:18:21.792 ] 00:18:21.792 }' 00:18:21.792 13:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:21.792 13:18:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.359 
13:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:22.359 [2024-07-26 13:18:02.818256] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:22.359 [2024-07-26 13:18:02.818289] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15ebf60 name Existed_Raid, state configuring 00:18:22.359 13:18:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:22.619 [2024-07-26 13:18:03.042877] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:22.619 [2024-07-26 13:18:03.042906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:22.619 [2024-07-26 13:18:03.042915] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:22.619 [2024-07-26 13:18:03.042926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:22.619 [2024-07-26 13:18:03.042934] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:22.619 [2024-07-26 13:18:03.042944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:22.619 [2024-07-26 13:18:03.042952] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:22.619 [2024-07-26 13:18:03.042962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:22.619 13:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b 
BaseBdev1 00:18:22.878 [2024-07-26 13:18:03.265032] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:22.878 BaseBdev1 00:18:22.878 13:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:22.878 13:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:22.878 13:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:22.878 13:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:22.878 13:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:22.878 13:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:22.878 13:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:23.137 13:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:23.396 [ 00:18:23.396 { 00:18:23.396 "name": "BaseBdev1", 00:18:23.396 "aliases": [ 00:18:23.396 "bedee31b-21c0-4837-bbec-01ce33804a5a" 00:18:23.396 ], 00:18:23.396 "product_name": "Malloc disk", 00:18:23.396 "block_size": 512, 00:18:23.396 "num_blocks": 65536, 00:18:23.396 "uuid": "bedee31b-21c0-4837-bbec-01ce33804a5a", 00:18:23.396 "assigned_rate_limits": { 00:18:23.396 "rw_ios_per_sec": 0, 00:18:23.396 "rw_mbytes_per_sec": 0, 00:18:23.396 "r_mbytes_per_sec": 0, 00:18:23.396 "w_mbytes_per_sec": 0 00:18:23.396 }, 00:18:23.396 "claimed": true, 00:18:23.396 "claim_type": "exclusive_write", 00:18:23.396 "zoned": false, 00:18:23.396 "supported_io_types": { 00:18:23.396 "read": true, 00:18:23.396 "write": 
true, 00:18:23.396 "unmap": true, 00:18:23.396 "flush": true, 00:18:23.396 "reset": true, 00:18:23.396 "nvme_admin": false, 00:18:23.396 "nvme_io": false, 00:18:23.396 "nvme_io_md": false, 00:18:23.396 "write_zeroes": true, 00:18:23.396 "zcopy": true, 00:18:23.396 "get_zone_info": false, 00:18:23.396 "zone_management": false, 00:18:23.396 "zone_append": false, 00:18:23.396 "compare": false, 00:18:23.396 "compare_and_write": false, 00:18:23.396 "abort": true, 00:18:23.396 "seek_hole": false, 00:18:23.396 "seek_data": false, 00:18:23.396 "copy": true, 00:18:23.396 "nvme_iov_md": false 00:18:23.396 }, 00:18:23.396 "memory_domains": [ 00:18:23.396 { 00:18:23.396 "dma_device_id": "system", 00:18:23.396 "dma_device_type": 1 00:18:23.396 }, 00:18:23.396 { 00:18:23.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:23.396 "dma_device_type": 2 00:18:23.396 } 00:18:23.396 ], 00:18:23.396 "driver_specific": {} 00:18:23.396 } 00:18:23.396 ] 00:18:23.396 13:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:23.396 13:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:23.396 13:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:23.396 13:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:23.396 13:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:23.396 13:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:23.396 13:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:23.396 13:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:23.396 13:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:18:23.396 13:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:23.396 13:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:23.396 13:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.396 13:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.655 13:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:23.655 "name": "Existed_Raid", 00:18:23.655 "uuid": "cac3981d-73f8-4c35-83a7-4704b680b7e6", 00:18:23.655 "strip_size_kb": 64, 00:18:23.655 "state": "configuring", 00:18:23.655 "raid_level": "raid0", 00:18:23.655 "superblock": true, 00:18:23.655 "num_base_bdevs": 4, 00:18:23.655 "num_base_bdevs_discovered": 1, 00:18:23.655 "num_base_bdevs_operational": 4, 00:18:23.655 "base_bdevs_list": [ 00:18:23.655 { 00:18:23.655 "name": "BaseBdev1", 00:18:23.655 "uuid": "bedee31b-21c0-4837-bbec-01ce33804a5a", 00:18:23.655 "is_configured": true, 00:18:23.655 "data_offset": 2048, 00:18:23.655 "data_size": 63488 00:18:23.655 }, 00:18:23.655 { 00:18:23.655 "name": "BaseBdev2", 00:18:23.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.655 "is_configured": false, 00:18:23.655 "data_offset": 0, 00:18:23.655 "data_size": 0 00:18:23.655 }, 00:18:23.655 { 00:18:23.655 "name": "BaseBdev3", 00:18:23.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.655 "is_configured": false, 00:18:23.655 "data_offset": 0, 00:18:23.655 "data_size": 0 00:18:23.655 }, 00:18:23.655 { 00:18:23.655 "name": "BaseBdev4", 00:18:23.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.655 "is_configured": false, 00:18:23.655 "data_offset": 0, 00:18:23.655 "data_size": 0 00:18:23.655 } 00:18:23.655 ] 
00:18:23.656 }'
00:18:23.656 13:18:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:18:23.656 13:18:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:24.224 13:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:18:24.224 [2024-07-26 13:18:04.729022] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:18:24.224 [2024-07-26 13:18:04.729059] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15eb7d0 name Existed_Raid, state configuring
00:18:24.224 13:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:18:24.483 [2024-07-26 13:18:04.961685] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:18:24.483 [2024-07-26 13:18:04.963054] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:18:24.483 [2024-07-26 13:18:04.963086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:18:24.483 [2024-07-26 13:18:04.963096] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:18:24.483 [2024-07-26 13:18:04.963107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:18:24.483 [2024-07-26 13:18:04.963116] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:18:24.483 [2024-07-26 13:18:04.963126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:18:24.483 13:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 ))
00:18:24.483 13:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:18:24.483 13:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:18:24.483 13:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:18:24.483 13:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:18:24.483 13:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:18:24.483 13:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:18:24.483 13:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:18:24.483 13:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:18:24.483 13:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:18:24.483 13:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:18:24.483 13:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:18:24.483 13:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:24.483 13:18:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:24.742 13:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:18:24.742 "name": "Existed_Raid",
00:18:24.742 "uuid": "c58ca326-1614-40f1-b65d-bec341ba2a3e",
00:18:24.742 "strip_size_kb": 64,
00:18:24.742 "state": "configuring",
00:18:24.742 "raid_level": "raid0",
00:18:24.742 "superblock": true,
00:18:24.742 "num_base_bdevs": 4,
00:18:24.742 "num_base_bdevs_discovered": 1,
00:18:24.742 "num_base_bdevs_operational": 4,
00:18:24.742 "base_bdevs_list": [
00:18:24.742 {
00:18:24.742 "name": "BaseBdev1",
00:18:24.742 "uuid": "bedee31b-21c0-4837-bbec-01ce33804a5a",
00:18:24.742 "is_configured": true,
00:18:24.742 "data_offset": 2048,
00:18:24.742 "data_size": 63488
00:18:24.742 },
00:18:24.742 {
00:18:24.742 "name": "BaseBdev2",
00:18:24.742 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:24.742 "is_configured": false,
00:18:24.742 "data_offset": 0,
00:18:24.742 "data_size": 0
00:18:24.742 },
00:18:24.742 {
00:18:24.742 "name": "BaseBdev3",
00:18:24.742 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:24.742 "is_configured": false,
00:18:24.742 "data_offset": 0,
00:18:24.742 "data_size": 0
00:18:24.742 },
00:18:24.742 {
00:18:24.742 "name": "BaseBdev4",
00:18:24.742 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:24.742 "is_configured": false,
00:18:24.742 "data_offset": 0,
00:18:24.742 "data_size": 0
00:18:24.742 }
00:18:24.742 ]
00:18:24.742 }'
00:18:24.742 13:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:18:24.742 13:18:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:25.310 13:18:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:18:25.878 [2024-07-26 13:18:06.272212] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:18:25.878 BaseBdev2
00:18:25.878 13:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2
00:18:25.878 13:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:18:25.878 13:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:18:25.878 13:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:18:25.878 13:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:18:25.878 13:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:18:25.878 13:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:18:26.137 13:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:18:26.396 [
00:18:26.396 {
00:18:26.396 "name": "BaseBdev2",
00:18:26.396 "aliases": [
00:18:26.396 "fd6c4e44-554a-434a-ab4c-cb3344c50a94"
00:18:26.396 ],
00:18:26.396 "product_name": "Malloc disk",
00:18:26.396 "block_size": 512,
00:18:26.396 "num_blocks": 65536,
00:18:26.396 "uuid": "fd6c4e44-554a-434a-ab4c-cb3344c50a94",
00:18:26.396 "assigned_rate_limits": {
00:18:26.396 "rw_ios_per_sec": 0,
00:18:26.396 "rw_mbytes_per_sec": 0,
00:18:26.396 "r_mbytes_per_sec": 0,
00:18:26.396 "w_mbytes_per_sec": 0
00:18:26.396 },
00:18:26.396 "claimed": true,
00:18:26.396 "claim_type": "exclusive_write",
00:18:26.396 "zoned": false,
00:18:26.396 "supported_io_types": {
00:18:26.396 "read": true,
00:18:26.396 "write": true,
00:18:26.396 "unmap": true,
00:18:26.396 "flush": true,
00:18:26.396 "reset": true,
00:18:26.396 "nvme_admin": false,
00:18:26.396 "nvme_io": false,
00:18:26.396 "nvme_io_md": false,
00:18:26.396 "write_zeroes": true,
00:18:26.396 "zcopy": true,
00:18:26.396 "get_zone_info": false,
00:18:26.396 "zone_management": false,
00:18:26.396 "zone_append": false,
00:18:26.396 "compare": false,
00:18:26.396 "compare_and_write": false,
00:18:26.396 "abort": true,
00:18:26.396 "seek_hole": false,
00:18:26.396 "seek_data": false,
00:18:26.396 "copy": true,
00:18:26.396 "nvme_iov_md": false
00:18:26.396 },
00:18:26.396 "memory_domains": [
00:18:26.396 {
00:18:26.396 "dma_device_id": "system",
00:18:26.396 "dma_device_type": 1
00:18:26.396 },
00:18:26.396 {
00:18:26.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:26.396 "dma_device_type": 2
00:18:26.396 }
00:18:26.396 ],
00:18:26.396 "driver_specific": {}
00:18:26.396 }
00:18:26.396 ]
00:18:26.396 13:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:18:26.396 13:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:18:26.396 13:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:18:26.396 13:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:18:26.396 13:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:18:26.396 13:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:18:26.396 13:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:18:26.396 13:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:18:26.396 13:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:18:26.396 13:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:18:26.396 13:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:18:26.396 13:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:18:26.396 13:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:18:26.396 13:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:26.396 13:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:26.655 13:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:18:26.655 "name": "Existed_Raid",
00:18:26.656 "uuid": "c58ca326-1614-40f1-b65d-bec341ba2a3e",
00:18:26.656 "strip_size_kb": 64,
00:18:26.656 "state": "configuring",
00:18:26.656 "raid_level": "raid0",
00:18:26.656 "superblock": true,
00:18:26.656 "num_base_bdevs": 4,
00:18:26.656 "num_base_bdevs_discovered": 2,
00:18:26.656 "num_base_bdevs_operational": 4,
00:18:26.656 "base_bdevs_list": [
00:18:26.656 {
00:18:26.656 "name": "BaseBdev1",
00:18:26.656 "uuid": "bedee31b-21c0-4837-bbec-01ce33804a5a",
00:18:26.656 "is_configured": true,
00:18:26.656 "data_offset": 2048,
00:18:26.656 "data_size": 63488
00:18:26.656 },
00:18:26.656 {
00:18:26.656 "name": "BaseBdev2",
00:18:26.656 "uuid": "fd6c4e44-554a-434a-ab4c-cb3344c50a94",
00:18:26.656 "is_configured": true,
00:18:26.656 "data_offset": 2048,
00:18:26.656 "data_size": 63488
00:18:26.656 },
00:18:26.656 {
00:18:26.656 "name": "BaseBdev3",
00:18:26.656 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:26.656 "is_configured": false,
00:18:26.656 "data_offset": 0,
00:18:26.656 "data_size": 0
00:18:26.656 },
00:18:26.656 {
00:18:26.656 "name": "BaseBdev4",
00:18:26.656 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:26.656 "is_configured": false,
00:18:26.656 "data_offset": 0,
00:18:26.656 "data_size": 0
00:18:26.656 }
00:18:26.656 ]
00:18:26.656 }'
00:18:26.656 13:18:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:18:26.656 13:18:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:27.224 13:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:18:27.483 [2024-07-26 13:18:07.855515] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:18:27.483 BaseBdev3
00:18:27.483 13:18:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3
00:18:27.483 13:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:18:27.483 13:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:18:27.483 13:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:18:27.483 13:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:18:27.483 13:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:18:27.483 13:18:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:18:27.742 13:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:18:28.002 [
00:18:28.002 {
00:18:28.002 "name": "BaseBdev3",
00:18:28.002 "aliases": [
00:18:28.002 "e6b119c1-53d6-4b2b-9ebc-9cac2aa1f046"
00:18:28.002 ],
00:18:28.002 "product_name": "Malloc disk",
00:18:28.002 "block_size": 512,
00:18:28.002 "num_blocks": 65536,
00:18:28.002 "uuid": "e6b119c1-53d6-4b2b-9ebc-9cac2aa1f046",
00:18:28.002 "assigned_rate_limits": {
00:18:28.002 "rw_ios_per_sec": 0,
00:18:28.002 "rw_mbytes_per_sec": 0,
00:18:28.002 "r_mbytes_per_sec": 0,
00:18:28.002 "w_mbytes_per_sec": 0
00:18:28.002 },
00:18:28.002 "claimed": true,
00:18:28.002 "claim_type": "exclusive_write",
00:18:28.002 "zoned": false,
00:18:28.002 "supported_io_types": {
00:18:28.002 "read": true,
00:18:28.002 "write": true,
00:18:28.002 "unmap": true,
00:18:28.002 "flush": true,
00:18:28.002 "reset": true,
00:18:28.002 "nvme_admin": false,
00:18:28.002 "nvme_io": false,
00:18:28.002 "nvme_io_md": false,
00:18:28.002 "write_zeroes": true,
00:18:28.002 "zcopy": true,
00:18:28.002 "get_zone_info": false,
00:18:28.002 "zone_management": false,
00:18:28.002 "zone_append": false,
00:18:28.002 "compare": false,
00:18:28.002 "compare_and_write": false,
00:18:28.002 "abort": true,
00:18:28.002 "seek_hole": false,
00:18:28.002 "seek_data": false,
00:18:28.002 "copy": true,
00:18:28.002 "nvme_iov_md": false
00:18:28.002 },
00:18:28.002 "memory_domains": [
00:18:28.002 {
00:18:28.002 "dma_device_id": "system",
00:18:28.002 "dma_device_type": 1
00:18:28.002 },
00:18:28.002 {
00:18:28.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:28.002 "dma_device_type": 2
00:18:28.002 }
00:18:28.002 ],
00:18:28.002 "driver_specific": {}
00:18:28.002 }
00:18:28.002 ]
00:18:28.002 13:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:18:28.002 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:18:28.002 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:18:28.002 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:18:28.002 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:18:28.002 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:18:28.002 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:18:28.002 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:18:28.002 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:18:28.002 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:18:28.002 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:18:28.002 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:18:28.002 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:18:28.002 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:28.002 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:28.261 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:18:28.261 "name": "Existed_Raid",
00:18:28.261 "uuid": "c58ca326-1614-40f1-b65d-bec341ba2a3e",
00:18:28.261 "strip_size_kb": 64,
00:18:28.261 "state": "configuring",
00:18:28.261 "raid_level": "raid0",
00:18:28.261 "superblock": true,
00:18:28.261 "num_base_bdevs": 4,
00:18:28.261 "num_base_bdevs_discovered": 3,
00:18:28.261 "num_base_bdevs_operational": 4,
00:18:28.261 "base_bdevs_list": [
00:18:28.261 {
00:18:28.261 "name": "BaseBdev1",
00:18:28.261 "uuid": "bedee31b-21c0-4837-bbec-01ce33804a5a",
00:18:28.261 "is_configured": true,
00:18:28.261 "data_offset": 2048,
00:18:28.261 "data_size": 63488
00:18:28.261 },
00:18:28.261 {
00:18:28.261 "name": "BaseBdev2",
00:18:28.261 "uuid": "fd6c4e44-554a-434a-ab4c-cb3344c50a94",
00:18:28.261 "is_configured": true,
00:18:28.261 "data_offset": 2048,
00:18:28.261 "data_size": 63488
00:18:28.261 },
00:18:28.261 {
00:18:28.261 "name": "BaseBdev3",
00:18:28.261 "uuid": "e6b119c1-53d6-4b2b-9ebc-9cac2aa1f046",
00:18:28.261 "is_configured": true,
00:18:28.261 "data_offset": 2048,
00:18:28.262 "data_size": 63488
00:18:28.262 },
00:18:28.262 {
00:18:28.262 "name": "BaseBdev4",
00:18:28.262 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:28.262 "is_configured": false,
00:18:28.262 "data_offset": 0,
00:18:28.262 "data_size": 0
00:18:28.262 }
00:18:28.262 ]
00:18:28.262 }'
00:18:28.262 13:18:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:18:28.262 13:18:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:28.829 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:18:28.829 [2024-07-26 13:18:09.286445] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:18:28.829 [2024-07-26 13:18:09.286595] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x15ec840
00:18:28.829 [2024-07-26 13:18:09.286608] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:18:28.829 [2024-07-26 13:18:09.286766] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15ec480
00:18:28.829 [2024-07-26 13:18:09.286879] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15ec840
00:18:28.829 [2024-07-26 13:18:09.286888] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x15ec840
00:18:28.829 [2024-07-26 13:18:09.286971] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:18:28.829 BaseBdev4
00:18:28.829 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4
00:18:28.829 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:18:28.830 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:18:28.830 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:18:28.830 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:18:28.830 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:18:28.830 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:18:29.118 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000
00:18:29.377 [
00:18:29.377 {
00:18:29.377 "name": "BaseBdev4",
00:18:29.377 "aliases": [
00:18:29.377 "b1b1ca84-7080-4f5f-816a-fa8495f53cef"
00:18:29.377 ],
00:18:29.377 "product_name": "Malloc disk",
00:18:29.377 "block_size": 512,
00:18:29.377 "num_blocks": 65536,
00:18:29.377 "uuid": "b1b1ca84-7080-4f5f-816a-fa8495f53cef",
00:18:29.377 "assigned_rate_limits": {
00:18:29.377 "rw_ios_per_sec": 0,
00:18:29.377 "rw_mbytes_per_sec": 0,
00:18:29.377 "r_mbytes_per_sec": 0,
00:18:29.377 "w_mbytes_per_sec": 0
00:18:29.377 },
00:18:29.377 "claimed": true,
00:18:29.377 "claim_type": "exclusive_write",
00:18:29.377 "zoned": false,
00:18:29.377 "supported_io_types": {
00:18:29.377 "read": true,
00:18:29.377 "write": true,
00:18:29.377 "unmap": true,
00:18:29.377 "flush": true,
00:18:29.377 "reset": true,
00:18:29.377 "nvme_admin": false,
00:18:29.377 "nvme_io": false,
00:18:29.377 "nvme_io_md": false,
00:18:29.377 "write_zeroes": true,
00:18:29.377 "zcopy": true,
00:18:29.377 "get_zone_info": false,
00:18:29.377 "zone_management": false,
00:18:29.377 "zone_append": false,
00:18:29.377 "compare": false,
00:18:29.377 "compare_and_write": false,
00:18:29.377 "abort": true,
00:18:29.377 "seek_hole": false,
00:18:29.377 "seek_data": false,
00:18:29.377 "copy": true,
00:18:29.377 "nvme_iov_md": false
00:18:29.377 },
00:18:29.377 "memory_domains": [
00:18:29.377 {
00:18:29.377 "dma_device_id": "system",
00:18:29.377 "dma_device_type": 1
00:18:29.377 },
00:18:29.377 {
00:18:29.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:29.377 "dma_device_type": 2
00:18:29.377 }
00:18:29.377 ],
00:18:29.377 "driver_specific": {}
00:18:29.377 }
00:18:29.377 ]
00:18:29.377 13:18:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:18:29.377 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ ))
00:18:29.377 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs ))
00:18:29.377 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4
00:18:29.377 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid
00:18:29.377 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:18:29.378 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:18:29.378 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:18:29.378 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:18:29.378 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:18:29.378 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:18:29.378 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:18:29.378 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:18:29.378 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:29.378 13:18:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:29.637 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:18:29.637 "name": "Existed_Raid",
00:18:29.637 "uuid": "c58ca326-1614-40f1-b65d-bec341ba2a3e",
00:18:29.637 "strip_size_kb": 64,
00:18:29.637 "state": "online",
00:18:29.637 "raid_level": "raid0",
00:18:29.637 "superblock": true,
00:18:29.637 "num_base_bdevs": 4,
00:18:29.637 "num_base_bdevs_discovered": 4,
00:18:29.637 "num_base_bdevs_operational": 4,
00:18:29.637 "base_bdevs_list": [
00:18:29.637 {
00:18:29.637 "name": "BaseBdev1",
00:18:29.637 "uuid": "bedee31b-21c0-4837-bbec-01ce33804a5a",
00:18:29.637 "is_configured": true,
00:18:29.637 "data_offset": 2048,
00:18:29.637 "data_size": 63488
00:18:29.637 },
00:18:29.637 {
00:18:29.637 "name": "BaseBdev2",
00:18:29.637 "uuid": "fd6c4e44-554a-434a-ab4c-cb3344c50a94",
00:18:29.637 "is_configured": true,
00:18:29.637 "data_offset": 2048,
00:18:29.637 "data_size": 63488
00:18:29.637 },
00:18:29.637 {
00:18:29.637 "name": "BaseBdev3",
00:18:29.637 "uuid": "e6b119c1-53d6-4b2b-9ebc-9cac2aa1f046",
00:18:29.637 "is_configured": true,
00:18:29.637 "data_offset": 2048,
00:18:29.637 "data_size": 63488
00:18:29.637 },
00:18:29.637 {
00:18:29.637 "name": "BaseBdev4",
00:18:29.637 "uuid": "b1b1ca84-7080-4f5f-816a-fa8495f53cef",
00:18:29.637 "is_configured": true,
00:18:29.637 "data_offset": 2048,
00:18:29.637 "data_size": 63488
00:18:29.637 }
00:18:29.637 ]
00:18:29.637 }'
00:18:29.637 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:18:29.637 13:18:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:18:30.206 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid
00:18:30.206 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid
00:18:30.206 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info
00:18:30.206 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info
00:18:30.206 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names
00:18:30.206 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name
00:18:30.206 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid
00:18:30.206 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]'
00:18:30.465 [2024-07-26 13:18:10.754617] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:18:30.465 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{
00:18:30.465 "name": "Existed_Raid",
00:18:30.465 "aliases": [
00:18:30.465 "c58ca326-1614-40f1-b65d-bec341ba2a3e"
00:18:30.465 ],
00:18:30.465 "product_name": "Raid Volume",
00:18:30.465 "block_size": 512,
00:18:30.465 "num_blocks": 253952,
00:18:30.465 "uuid": "c58ca326-1614-40f1-b65d-bec341ba2a3e",
00:18:30.465 "assigned_rate_limits": {
00:18:30.465 "rw_ios_per_sec": 0,
00:18:30.465 "rw_mbytes_per_sec": 0,
00:18:30.465 "r_mbytes_per_sec": 0,
00:18:30.465 "w_mbytes_per_sec": 0
00:18:30.465 },
00:18:30.465 "claimed": false,
00:18:30.465 "zoned": false,
00:18:30.465 "supported_io_types": {
00:18:30.465 "read": true,
00:18:30.466 "write": true,
00:18:30.466 "unmap": true,
00:18:30.466 "flush": true,
00:18:30.466 "reset": true,
00:18:30.466 "nvme_admin": false,
00:18:30.466 "nvme_io": false,
00:18:30.466 "nvme_io_md": false,
00:18:30.466 "write_zeroes": true,
00:18:30.466 "zcopy": false,
00:18:30.466 "get_zone_info": false,
00:18:30.466 "zone_management": false,
00:18:30.466 "zone_append": false,
00:18:30.466 "compare": false,
00:18:30.466 "compare_and_write": false,
00:18:30.466 "abort": false,
00:18:30.466 "seek_hole": false,
00:18:30.466 "seek_data": false,
00:18:30.466 "copy": false,
00:18:30.466 "nvme_iov_md": false
00:18:30.466 },
00:18:30.466 "memory_domains": [
00:18:30.466 {
00:18:30.466 "dma_device_id": "system",
00:18:30.466 "dma_device_type": 1
00:18:30.466 },
00:18:30.466 {
00:18:30.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:30.466 "dma_device_type": 2
00:18:30.466 },
00:18:30.466 {
00:18:30.466 "dma_device_id": "system",
00:18:30.466 "dma_device_type": 1
00:18:30.466 },
00:18:30.466 {
00:18:30.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:30.466 "dma_device_type": 2
00:18:30.466 },
00:18:30.466 {
00:18:30.466 "dma_device_id": "system",
00:18:30.466 "dma_device_type": 1
00:18:30.466 },
00:18:30.466 {
00:18:30.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:30.466 "dma_device_type": 2
00:18:30.466 },
00:18:30.466 {
00:18:30.466 "dma_device_id": "system",
00:18:30.466 "dma_device_type": 1
00:18:30.466 },
00:18:30.466 {
00:18:30.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:30.466 "dma_device_type": 2
00:18:30.466 }
00:18:30.466 ],
00:18:30.466 "driver_specific": {
00:18:30.466 "raid": {
00:18:30.466 "uuid": "c58ca326-1614-40f1-b65d-bec341ba2a3e",
00:18:30.466 "strip_size_kb": 64,
00:18:30.466 "state": "online",
00:18:30.466 "raid_level": "raid0",
00:18:30.466 "superblock": true,
00:18:30.466 "num_base_bdevs": 4,
00:18:30.466 "num_base_bdevs_discovered": 4,
00:18:30.466 "num_base_bdevs_operational": 4,
00:18:30.466 "base_bdevs_list": [
00:18:30.466 {
00:18:30.466 "name": "BaseBdev1",
00:18:30.466 "uuid": "bedee31b-21c0-4837-bbec-01ce33804a5a",
00:18:30.466 "is_configured": true,
00:18:30.466 "data_offset": 2048,
00:18:30.466 "data_size": 63488
00:18:30.466 },
00:18:30.466 {
00:18:30.466 "name": "BaseBdev2",
00:18:30.466 "uuid": "fd6c4e44-554a-434a-ab4c-cb3344c50a94",
00:18:30.466 "is_configured": true,
00:18:30.466 "data_offset": 2048,
00:18:30.466 "data_size": 63488
00:18:30.466 },
00:18:30.466 {
00:18:30.466 "name": "BaseBdev3",
00:18:30.466 "uuid": "e6b119c1-53d6-4b2b-9ebc-9cac2aa1f046",
00:18:30.466 "is_configured": true,
00:18:30.466 "data_offset": 2048,
00:18:30.466 "data_size": 63488
00:18:30.466 },
00:18:30.466 {
00:18:30.466 "name": "BaseBdev4",
00:18:30.466 "uuid": "b1b1ca84-7080-4f5f-816a-fa8495f53cef",
00:18:30.466 "is_configured": true,
00:18:30.466 "data_offset": 2048,
00:18:30.466 "data_size": 63488
00:18:30.466 }
00:18:30.466 ]
00:18:30.466 }
00:18:30.466 }
00:18:30.466 }'
00:18:30.466 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:18:30.466 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1
00:18:30.466 BaseBdev2
00:18:30.466 BaseBdev3
00:18:30.466 BaseBdev4'
00:18:30.466 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:18:30.466 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1
00:18:30.466 13:18:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:18:30.725 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:18:30.725 "name": "BaseBdev1",
00:18:30.725 "aliases": [
00:18:30.725 "bedee31b-21c0-4837-bbec-01ce33804a5a"
00:18:30.725 ],
00:18:30.725 "product_name": "Malloc disk",
00:18:30.725 "block_size": 512,
00:18:30.725 "num_blocks": 65536,
00:18:30.725 "uuid": "bedee31b-21c0-4837-bbec-01ce33804a5a",
00:18:30.725 "assigned_rate_limits": {
00:18:30.725 "rw_ios_per_sec": 0,
00:18:30.725 "rw_mbytes_per_sec": 0,
00:18:30.725 "r_mbytes_per_sec": 0,
00:18:30.725 "w_mbytes_per_sec": 0
00:18:30.725 },
00:18:30.725 "claimed": true,
00:18:30.725 "claim_type": "exclusive_write",
00:18:30.725 "zoned": false,
00:18:30.725 "supported_io_types": {
00:18:30.725 "read": true,
00:18:30.725 "write": true,
00:18:30.725 "unmap": true,
00:18:30.725 "flush": true,
00:18:30.725 "reset": true,
00:18:30.725 "nvme_admin": false,
00:18:30.725 "nvme_io": false,
00:18:30.725 "nvme_io_md": false,
00:18:30.725 "write_zeroes": true,
00:18:30.725 "zcopy": true,
00:18:30.725 "get_zone_info": false,
00:18:30.725 "zone_management": false,
00:18:30.725 "zone_append": false,
00:18:30.725 "compare": false,
00:18:30.725 "compare_and_write": false,
00:18:30.725 "abort": true,
00:18:30.725 "seek_hole": false,
00:18:30.725 "seek_data": false,
00:18:30.725 "copy": true,
00:18:30.725 "nvme_iov_md": false
00:18:30.725 },
00:18:30.725 "memory_domains": [
00:18:30.725 {
00:18:30.725 "dma_device_id": "system",
00:18:30.725 "dma_device_type": 1
00:18:30.725 },
00:18:30.725 {
00:18:30.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:30.725 "dma_device_type": 2
00:18:30.725 }
00:18:30.725 ],
00:18:30.725 "driver_specific": {}
00:18:30.725 }'
00:18:30.725 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:18:30.725 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:18:30.725 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:18:30.725 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:18:30.725 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:18:30.725 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:18:30.725 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:18:30.985 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:18:30.985 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:18:30.985 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:18:30.985 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:18:30.985 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:18:30.985 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:18:30.985 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2
00:18:30.985 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:18:31.244 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:18:31.244 "name": "BaseBdev2",
00:18:31.244 "aliases": [
00:18:31.244 "fd6c4e44-554a-434a-ab4c-cb3344c50a94"
00:18:31.244 ],
00:18:31.244 "product_name": "Malloc disk",
00:18:31.244 "block_size": 512,
00:18:31.244 "num_blocks": 65536,
00:18:31.244 "uuid": "fd6c4e44-554a-434a-ab4c-cb3344c50a94",
00:18:31.244 "assigned_rate_limits": {
00:18:31.244 "rw_ios_per_sec": 0,
00:18:31.244 "rw_mbytes_per_sec": 0,
00:18:31.244 "r_mbytes_per_sec": 0,
00:18:31.244 "w_mbytes_per_sec": 0
00:18:31.244 },
00:18:31.244 "claimed": true,
00:18:31.244 "claim_type": "exclusive_write",
00:18:31.244 "zoned": false,
00:18:31.244 "supported_io_types": {
00:18:31.244 "read": true,
00:18:31.244 "write": true,
00:18:31.244 "unmap": true,
00:18:31.244 "flush": true,
00:18:31.244 "reset": true,
00:18:31.244 "nvme_admin": false,
00:18:31.244 "nvme_io": false,
00:18:31.244 "nvme_io_md": false,
00:18:31.244 "write_zeroes": true,
00:18:31.244 "zcopy": true,
00:18:31.244 "get_zone_info": false,
00:18:31.244 "zone_management": false,
00:18:31.244 "zone_append": false,
00:18:31.244 "compare": false,
00:18:31.244 "compare_and_write": false,
00:18:31.244 "abort": true,
00:18:31.244 "seek_hole": false,
00:18:31.244 "seek_data": false,
00:18:31.244 "copy": true,
00:18:31.244 "nvme_iov_md": false
00:18:31.244 },
00:18:31.244 "memory_domains": [
00:18:31.244 {
00:18:31.244 "dma_device_id": "system",
00:18:31.244 "dma_device_type": 1
00:18:31.244 },
00:18:31.244 {
00:18:31.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:31.244 "dma_device_type": 2
00:18:31.244 }
00:18:31.244 ],
00:18:31.244 "driver_specific": {}
00:18:31.244 }'
00:18:31.244 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:18:31.244 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:18:31.244 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:18:31.244 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:18:31.244 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:18:31.504 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:18:31.504 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:18:31.504 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:18:31.504 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:18:31.504 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:18:31.504 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:18:31.504 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:18:31.504 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:18:31.504 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3
00:18:31.504 13:18:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:18:31.763 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:18:31.763 "name": "BaseBdev3",
00:18:31.763 "aliases": [
00:18:31.763 "e6b119c1-53d6-4b2b-9ebc-9cac2aa1f046"
00:18:31.763 ],
00:18:31.763 "product_name": "Malloc disk",
00:18:31.763 "block_size": 512,
00:18:31.763 "num_blocks": 65536,
00:18:31.763 "uuid": "e6b119c1-53d6-4b2b-9ebc-9cac2aa1f046",
00:18:31.763 "assigned_rate_limits": {
00:18:31.763 "rw_ios_per_sec": 0,
00:18:31.763 "rw_mbytes_per_sec": 0,
00:18:31.763 "r_mbytes_per_sec": 0,
00:18:31.763 "w_mbytes_per_sec": 0
00:18:31.763 },
00:18:31.763 "claimed": true,
00:18:31.763 "claim_type": "exclusive_write",
00:18:31.763 "zoned": false,
00:18:31.763 "supported_io_types": {
00:18:31.763 "read": true,
00:18:31.763 "write": true,
00:18:31.763 "unmap": true,
00:18:31.763 "flush": true,
00:18:31.763 "reset": true,
00:18:31.763 "nvme_admin": false,
00:18:31.763 "nvme_io": false,
00:18:31.763 "nvme_io_md": false,
00:18:31.763 "write_zeroes": true,
00:18:31.763 "zcopy": true,
00:18:31.763 "get_zone_info": false,
00:18:31.763 "zone_management": false,
00:18:31.763 "zone_append": false,
00:18:31.763 "compare": false,
00:18:31.763 "compare_and_write": false,
00:18:31.763 "abort": true,
00:18:31.763 "seek_hole": false,
00:18:31.763 "seek_data": false,
00:18:31.763 "copy": true,
00:18:31.763 "nvme_iov_md": false
00:18:31.763 },
00:18:31.763 "memory_domains": [
00:18:31.763 {
00:18:31.763 "dma_device_id": "system",
00:18:31.763 "dma_device_type": 1
00:18:31.763 },
00:18:31.763 {
00:18:31.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:31.763 "dma_device_type": 2
00:18:31.763 }
00:18:31.763 ],
00:18:31.763 "driver_specific": {}
00:18:31.763 }'
00:18:31.763 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:18:31.763 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:18:31.763 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:18:31.763 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:18:32.022 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:18:32.022 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:18:32.022 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:18:32.022 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:18:32.022 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:18:32.022 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:18:32.022 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:18:32.022 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:18:32.022 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:18:32.022 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4
00:18:32.022 13:18:12
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:32.281 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:32.281 "name": "BaseBdev4", 00:18:32.281 "aliases": [ 00:18:32.281 "b1b1ca84-7080-4f5f-816a-fa8495f53cef" 00:18:32.281 ], 00:18:32.281 "product_name": "Malloc disk", 00:18:32.281 "block_size": 512, 00:18:32.281 "num_blocks": 65536, 00:18:32.281 "uuid": "b1b1ca84-7080-4f5f-816a-fa8495f53cef", 00:18:32.281 "assigned_rate_limits": { 00:18:32.281 "rw_ios_per_sec": 0, 00:18:32.281 "rw_mbytes_per_sec": 0, 00:18:32.281 "r_mbytes_per_sec": 0, 00:18:32.281 "w_mbytes_per_sec": 0 00:18:32.281 }, 00:18:32.281 "claimed": true, 00:18:32.281 "claim_type": "exclusive_write", 00:18:32.281 "zoned": false, 00:18:32.281 "supported_io_types": { 00:18:32.281 "read": true, 00:18:32.281 "write": true, 00:18:32.281 "unmap": true, 00:18:32.281 "flush": true, 00:18:32.281 "reset": true, 00:18:32.281 "nvme_admin": false, 00:18:32.281 "nvme_io": false, 00:18:32.281 "nvme_io_md": false, 00:18:32.281 "write_zeroes": true, 00:18:32.281 "zcopy": true, 00:18:32.281 "get_zone_info": false, 00:18:32.281 "zone_management": false, 00:18:32.281 "zone_append": false, 00:18:32.281 "compare": false, 00:18:32.281 "compare_and_write": false, 00:18:32.281 "abort": true, 00:18:32.281 "seek_hole": false, 00:18:32.281 "seek_data": false, 00:18:32.281 "copy": true, 00:18:32.281 "nvme_iov_md": false 00:18:32.281 }, 00:18:32.281 "memory_domains": [ 00:18:32.281 { 00:18:32.281 "dma_device_id": "system", 00:18:32.281 "dma_device_type": 1 00:18:32.281 }, 00:18:32.281 { 00:18:32.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.281 "dma_device_type": 2 00:18:32.281 } 00:18:32.281 ], 00:18:32.281 "driver_specific": {} 00:18:32.281 }' 00:18:32.281 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:32.281 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:18:32.540 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:32.540 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:32.540 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:32.540 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:32.540 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:32.540 13:18:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:32.540 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:32.540 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:32.799 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:32.799 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:32.799 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:32.799 [2024-07-26 13:18:13.317254] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:32.799 [2024-07-26 13:18:13.317278] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:32.799 [2024-07-26 13:18:13.317321] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:33.058 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:33.058 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:18:33.058 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:33.058 13:18:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:18:33.058 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:33.058 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:18:33.058 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:33.058 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:18:33.058 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:33.058 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:33.058 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:33.058 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:33.058 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:33.058 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:33.058 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:33.058 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.058 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.058 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:33.058 "name": "Existed_Raid", 00:18:33.058 "uuid": "c58ca326-1614-40f1-b65d-bec341ba2a3e", 00:18:33.058 "strip_size_kb": 64, 00:18:33.058 "state": "offline", 00:18:33.058 
"raid_level": "raid0", 00:18:33.058 "superblock": true, 00:18:33.058 "num_base_bdevs": 4, 00:18:33.058 "num_base_bdevs_discovered": 3, 00:18:33.058 "num_base_bdevs_operational": 3, 00:18:33.058 "base_bdevs_list": [ 00:18:33.058 { 00:18:33.058 "name": null, 00:18:33.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.058 "is_configured": false, 00:18:33.058 "data_offset": 2048, 00:18:33.058 "data_size": 63488 00:18:33.058 }, 00:18:33.058 { 00:18:33.058 "name": "BaseBdev2", 00:18:33.058 "uuid": "fd6c4e44-554a-434a-ab4c-cb3344c50a94", 00:18:33.058 "is_configured": true, 00:18:33.058 "data_offset": 2048, 00:18:33.058 "data_size": 63488 00:18:33.058 }, 00:18:33.058 { 00:18:33.058 "name": "BaseBdev3", 00:18:33.058 "uuid": "e6b119c1-53d6-4b2b-9ebc-9cac2aa1f046", 00:18:33.058 "is_configured": true, 00:18:33.058 "data_offset": 2048, 00:18:33.058 "data_size": 63488 00:18:33.058 }, 00:18:33.058 { 00:18:33.058 "name": "BaseBdev4", 00:18:33.058 "uuid": "b1b1ca84-7080-4f5f-816a-fa8495f53cef", 00:18:33.058 "is_configured": true, 00:18:33.058 "data_offset": 2048, 00:18:33.058 "data_size": 63488 00:18:33.058 } 00:18:33.058 ] 00:18:33.058 }' 00:18:33.058 13:18:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:33.058 13:18:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.625 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:33.625 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:33.625 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.625 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:33.884 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
raid_bdev=Existed_Raid 00:18:33.884 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:33.884 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:34.143 [2024-07-26 13:18:14.577556] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:34.143 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:34.143 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:34.143 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.143 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:34.402 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:34.402 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:34.402 13:18:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:34.662 [2024-07-26 13:18:15.037108] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:34.662 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:34.662 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:34.662 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:34.662 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.921 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:34.921 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:34.921 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:35.180 [2024-07-26 13:18:15.500281] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:35.180 [2024-07-26 13:18:15.500316] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15ec840 name Existed_Raid, state offline 00:18:35.180 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:35.180 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:35.180 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.180 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:35.438 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:35.438 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:35.438 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:18:35.438 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:35.438 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:35.438 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:35.697 BaseBdev2 00:18:35.697 13:18:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:35.697 13:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:35.697 13:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:35.697 13:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:35.697 13:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:35.697 13:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:35.697 13:18:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:35.697 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:35.957 [ 00:18:35.957 { 00:18:35.957 "name": "BaseBdev2", 00:18:35.957 "aliases": [ 00:18:35.957 "4681cf95-ddf5-47cc-828c-9be33af555d2" 00:18:35.957 ], 00:18:35.957 "product_name": "Malloc disk", 00:18:35.957 "block_size": 512, 00:18:35.957 "num_blocks": 65536, 00:18:35.957 "uuid": "4681cf95-ddf5-47cc-828c-9be33af555d2", 00:18:35.957 "assigned_rate_limits": { 00:18:35.957 "rw_ios_per_sec": 0, 00:18:35.957 "rw_mbytes_per_sec": 0, 00:18:35.957 "r_mbytes_per_sec": 0, 00:18:35.957 "w_mbytes_per_sec": 0 00:18:35.957 }, 00:18:35.957 "claimed": false, 00:18:35.957 "zoned": false, 00:18:35.957 "supported_io_types": { 00:18:35.957 "read": true, 00:18:35.957 "write": true, 00:18:35.957 "unmap": true, 00:18:35.957 "flush": 
true, 00:18:35.957 "reset": true, 00:18:35.957 "nvme_admin": false, 00:18:35.957 "nvme_io": false, 00:18:35.957 "nvme_io_md": false, 00:18:35.957 "write_zeroes": true, 00:18:35.957 "zcopy": true, 00:18:35.957 "get_zone_info": false, 00:18:35.957 "zone_management": false, 00:18:35.957 "zone_append": false, 00:18:35.957 "compare": false, 00:18:35.957 "compare_and_write": false, 00:18:35.957 "abort": true, 00:18:35.957 "seek_hole": false, 00:18:35.957 "seek_data": false, 00:18:35.957 "copy": true, 00:18:35.957 "nvme_iov_md": false 00:18:35.957 }, 00:18:35.957 "memory_domains": [ 00:18:35.957 { 00:18:35.957 "dma_device_id": "system", 00:18:35.957 "dma_device_type": 1 00:18:35.957 }, 00:18:35.957 { 00:18:35.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:35.957 "dma_device_type": 2 00:18:35.957 } 00:18:35.957 ], 00:18:35.957 "driver_specific": {} 00:18:35.957 } 00:18:35.957 ] 00:18:35.957 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:35.957 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:35.957 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:35.957 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:36.216 BaseBdev3 00:18:36.216 13:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:36.216 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:36.216 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:36.216 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:36.216 13:18:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:36.216 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:36.216 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:36.474 13:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:36.733 [ 00:18:36.733 { 00:18:36.733 "name": "BaseBdev3", 00:18:36.733 "aliases": [ 00:18:36.733 "6dd3c5c5-8f97-432b-8eb8-50f61508756b" 00:18:36.733 ], 00:18:36.733 "product_name": "Malloc disk", 00:18:36.733 "block_size": 512, 00:18:36.733 "num_blocks": 65536, 00:18:36.733 "uuid": "6dd3c5c5-8f97-432b-8eb8-50f61508756b", 00:18:36.733 "assigned_rate_limits": { 00:18:36.733 "rw_ios_per_sec": 0, 00:18:36.733 "rw_mbytes_per_sec": 0, 00:18:36.733 "r_mbytes_per_sec": 0, 00:18:36.733 "w_mbytes_per_sec": 0 00:18:36.733 }, 00:18:36.733 "claimed": false, 00:18:36.733 "zoned": false, 00:18:36.733 "supported_io_types": { 00:18:36.733 "read": true, 00:18:36.733 "write": true, 00:18:36.733 "unmap": true, 00:18:36.733 "flush": true, 00:18:36.733 "reset": true, 00:18:36.733 "nvme_admin": false, 00:18:36.733 "nvme_io": false, 00:18:36.733 "nvme_io_md": false, 00:18:36.733 "write_zeroes": true, 00:18:36.733 "zcopy": true, 00:18:36.733 "get_zone_info": false, 00:18:36.733 "zone_management": false, 00:18:36.733 "zone_append": false, 00:18:36.733 "compare": false, 00:18:36.733 "compare_and_write": false, 00:18:36.733 "abort": true, 00:18:36.733 "seek_hole": false, 00:18:36.734 "seek_data": false, 00:18:36.734 "copy": true, 00:18:36.734 "nvme_iov_md": false 00:18:36.734 }, 00:18:36.734 "memory_domains": [ 00:18:36.734 { 00:18:36.734 "dma_device_id": "system", 00:18:36.734 "dma_device_type": 1 
00:18:36.734 }, 00:18:36.734 { 00:18:36.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.734 "dma_device_type": 2 00:18:36.734 } 00:18:36.734 ], 00:18:36.734 "driver_specific": {} 00:18:36.734 } 00:18:36.734 ] 00:18:36.734 13:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:36.734 13:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:36.734 13:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:36.734 13:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:36.993 BaseBdev4 00:18:36.993 13:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:18:36.993 13:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:18:36.993 13:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:36.993 13:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:36.993 13:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:36.993 13:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:36.993 13:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:37.252 13:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:37.252 [ 00:18:37.252 { 00:18:37.252 "name": "BaseBdev4", 00:18:37.252 "aliases": [ 
00:18:37.252 "08511a86-a784-4d9c-99e9-497bdcb2eac5" 00:18:37.252 ], 00:18:37.252 "product_name": "Malloc disk", 00:18:37.252 "block_size": 512, 00:18:37.252 "num_blocks": 65536, 00:18:37.252 "uuid": "08511a86-a784-4d9c-99e9-497bdcb2eac5", 00:18:37.252 "assigned_rate_limits": { 00:18:37.252 "rw_ios_per_sec": 0, 00:18:37.252 "rw_mbytes_per_sec": 0, 00:18:37.252 "r_mbytes_per_sec": 0, 00:18:37.252 "w_mbytes_per_sec": 0 00:18:37.252 }, 00:18:37.252 "claimed": false, 00:18:37.253 "zoned": false, 00:18:37.253 "supported_io_types": { 00:18:37.253 "read": true, 00:18:37.253 "write": true, 00:18:37.253 "unmap": true, 00:18:37.253 "flush": true, 00:18:37.253 "reset": true, 00:18:37.253 "nvme_admin": false, 00:18:37.253 "nvme_io": false, 00:18:37.253 "nvme_io_md": false, 00:18:37.253 "write_zeroes": true, 00:18:37.253 "zcopy": true, 00:18:37.253 "get_zone_info": false, 00:18:37.253 "zone_management": false, 00:18:37.253 "zone_append": false, 00:18:37.253 "compare": false, 00:18:37.253 "compare_and_write": false, 00:18:37.253 "abort": true, 00:18:37.253 "seek_hole": false, 00:18:37.253 "seek_data": false, 00:18:37.253 "copy": true, 00:18:37.253 "nvme_iov_md": false 00:18:37.253 }, 00:18:37.253 "memory_domains": [ 00:18:37.253 { 00:18:37.253 "dma_device_id": "system", 00:18:37.253 "dma_device_type": 1 00:18:37.253 }, 00:18:37.253 { 00:18:37.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.253 "dma_device_type": 2 00:18:37.253 } 00:18:37.253 ], 00:18:37.253 "driver_specific": {} 00:18:37.253 } 00:18:37.253 ] 00:18:37.512 13:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:37.512 13:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:37.512 13:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:37.512 13:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:37.512 [2024-07-26 13:18:17.989870] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:37.512 [2024-07-26 13:18:17.989909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:37.512 [2024-07-26 13:18:17.989928] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:37.512 [2024-07-26 13:18:17.991181] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:37.512 [2024-07-26 13:18:17.991221] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:37.512 13:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:37.512 13:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:37.512 13:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:37.512 13:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:37.512 13:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:37.512 13:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:37.512 13:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:37.512 13:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:37.512 13:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:37.512 13:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- 
# local tmp 00:18:37.512 13:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.512 13:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:37.772 13:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:37.772 "name": "Existed_Raid", 00:18:37.772 "uuid": "a72c5e75-0b16-4515-a0f7-80350bdc7c93", 00:18:37.772 "strip_size_kb": 64, 00:18:37.772 "state": "configuring", 00:18:37.772 "raid_level": "raid0", 00:18:37.772 "superblock": true, 00:18:37.772 "num_base_bdevs": 4, 00:18:37.772 "num_base_bdevs_discovered": 3, 00:18:37.772 "num_base_bdevs_operational": 4, 00:18:37.772 "base_bdevs_list": [ 00:18:37.772 { 00:18:37.772 "name": "BaseBdev1", 00:18:37.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.772 "is_configured": false, 00:18:37.772 "data_offset": 0, 00:18:37.772 "data_size": 0 00:18:37.772 }, 00:18:37.772 { 00:18:37.772 "name": "BaseBdev2", 00:18:37.772 "uuid": "4681cf95-ddf5-47cc-828c-9be33af555d2", 00:18:37.772 "is_configured": true, 00:18:37.772 "data_offset": 2048, 00:18:37.772 "data_size": 63488 00:18:37.772 }, 00:18:37.772 { 00:18:37.772 "name": "BaseBdev3", 00:18:37.772 "uuid": "6dd3c5c5-8f97-432b-8eb8-50f61508756b", 00:18:37.772 "is_configured": true, 00:18:37.772 "data_offset": 2048, 00:18:37.772 "data_size": 63488 00:18:37.772 }, 00:18:37.772 { 00:18:37.772 "name": "BaseBdev4", 00:18:37.772 "uuid": "08511a86-a784-4d9c-99e9-497bdcb2eac5", 00:18:37.772 "is_configured": true, 00:18:37.772 "data_offset": 2048, 00:18:37.772 "data_size": 63488 00:18:37.772 } 00:18:37.772 ] 00:18:37.772 }' 00:18:37.772 13:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:37.772 13:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:18:38.340 13:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:38.599 [2024-07-26 13:18:18.944349] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:38.599 13:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:38.599 13:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:38.599 13:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:38.599 13:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:38.599 13:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:38.599 13:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:38.599 13:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:38.599 13:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:38.599 13:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:38.599 13:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:38.599 13:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:38.599 13:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.857 13:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:38.857 "name": "Existed_Raid", 
00:18:38.857 "uuid": "a72c5e75-0b16-4515-a0f7-80350bdc7c93", 00:18:38.857 "strip_size_kb": 64, 00:18:38.857 "state": "configuring", 00:18:38.857 "raid_level": "raid0", 00:18:38.857 "superblock": true, 00:18:38.857 "num_base_bdevs": 4, 00:18:38.857 "num_base_bdevs_discovered": 2, 00:18:38.857 "num_base_bdevs_operational": 4, 00:18:38.857 "base_bdevs_list": [ 00:18:38.857 { 00:18:38.857 "name": "BaseBdev1", 00:18:38.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.857 "is_configured": false, 00:18:38.857 "data_offset": 0, 00:18:38.857 "data_size": 0 00:18:38.857 }, 00:18:38.857 { 00:18:38.857 "name": null, 00:18:38.857 "uuid": "4681cf95-ddf5-47cc-828c-9be33af555d2", 00:18:38.857 "is_configured": false, 00:18:38.857 "data_offset": 2048, 00:18:38.857 "data_size": 63488 00:18:38.857 }, 00:18:38.857 { 00:18:38.857 "name": "BaseBdev3", 00:18:38.857 "uuid": "6dd3c5c5-8f97-432b-8eb8-50f61508756b", 00:18:38.857 "is_configured": true, 00:18:38.857 "data_offset": 2048, 00:18:38.857 "data_size": 63488 00:18:38.857 }, 00:18:38.857 { 00:18:38.857 "name": "BaseBdev4", 00:18:38.857 "uuid": "08511a86-a784-4d9c-99e9-497bdcb2eac5", 00:18:38.857 "is_configured": true, 00:18:38.857 "data_offset": 2048, 00:18:38.857 "data_size": 63488 00:18:38.857 } 00:18:38.857 ] 00:18:38.857 }' 00:18:38.857 13:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:38.857 13:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.425 13:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.425 13:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:39.425 13:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:39.425 13:18:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:39.684 [2024-07-26 13:18:20.154748] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:39.684 BaseBdev1 00:18:39.684 13:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:39.684 13:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:39.684 13:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:39.684 13:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:39.684 13:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:39.684 13:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:39.684 13:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:39.942 13:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:40.201 [ 00:18:40.201 { 00:18:40.201 "name": "BaseBdev1", 00:18:40.201 "aliases": [ 00:18:40.201 "803cee8e-592e-4e40-8881-e367fb893ba0" 00:18:40.201 ], 00:18:40.201 "product_name": "Malloc disk", 00:18:40.201 "block_size": 512, 00:18:40.201 "num_blocks": 65536, 00:18:40.201 "uuid": "803cee8e-592e-4e40-8881-e367fb893ba0", 00:18:40.201 "assigned_rate_limits": { 00:18:40.201 "rw_ios_per_sec": 0, 00:18:40.201 "rw_mbytes_per_sec": 0, 00:18:40.201 "r_mbytes_per_sec": 0, 00:18:40.201 "w_mbytes_per_sec": 0 00:18:40.201 }, 
00:18:40.201 "claimed": true, 00:18:40.201 "claim_type": "exclusive_write", 00:18:40.201 "zoned": false, 00:18:40.201 "supported_io_types": { 00:18:40.201 "read": true, 00:18:40.201 "write": true, 00:18:40.201 "unmap": true, 00:18:40.202 "flush": true, 00:18:40.202 "reset": true, 00:18:40.202 "nvme_admin": false, 00:18:40.202 "nvme_io": false, 00:18:40.202 "nvme_io_md": false, 00:18:40.202 "write_zeroes": true, 00:18:40.202 "zcopy": true, 00:18:40.202 "get_zone_info": false, 00:18:40.202 "zone_management": false, 00:18:40.202 "zone_append": false, 00:18:40.202 "compare": false, 00:18:40.202 "compare_and_write": false, 00:18:40.202 "abort": true, 00:18:40.202 "seek_hole": false, 00:18:40.202 "seek_data": false, 00:18:40.202 "copy": true, 00:18:40.202 "nvme_iov_md": false 00:18:40.202 }, 00:18:40.202 "memory_domains": [ 00:18:40.202 { 00:18:40.202 "dma_device_id": "system", 00:18:40.202 "dma_device_type": 1 00:18:40.202 }, 00:18:40.202 { 00:18:40.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.202 "dma_device_type": 2 00:18:40.202 } 00:18:40.202 ], 00:18:40.202 "driver_specific": {} 00:18:40.202 } 00:18:40.202 ] 00:18:40.202 13:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:40.202 13:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:40.202 13:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:40.202 13:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:40.202 13:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:40.202 13:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:40.202 13:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:40.202 
13:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:40.202 13:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:40.202 13:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:40.202 13:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:40.202 13:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.202 13:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.461 13:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:40.461 "name": "Existed_Raid", 00:18:40.461 "uuid": "a72c5e75-0b16-4515-a0f7-80350bdc7c93", 00:18:40.461 "strip_size_kb": 64, 00:18:40.461 "state": "configuring", 00:18:40.461 "raid_level": "raid0", 00:18:40.461 "superblock": true, 00:18:40.461 "num_base_bdevs": 4, 00:18:40.461 "num_base_bdevs_discovered": 3, 00:18:40.461 "num_base_bdevs_operational": 4, 00:18:40.461 "base_bdevs_list": [ 00:18:40.461 { 00:18:40.461 "name": "BaseBdev1", 00:18:40.461 "uuid": "803cee8e-592e-4e40-8881-e367fb893ba0", 00:18:40.461 "is_configured": true, 00:18:40.461 "data_offset": 2048, 00:18:40.461 "data_size": 63488 00:18:40.461 }, 00:18:40.461 { 00:18:40.461 "name": null, 00:18:40.461 "uuid": "4681cf95-ddf5-47cc-828c-9be33af555d2", 00:18:40.461 "is_configured": false, 00:18:40.461 "data_offset": 2048, 00:18:40.461 "data_size": 63488 00:18:40.461 }, 00:18:40.461 { 00:18:40.461 "name": "BaseBdev3", 00:18:40.461 "uuid": "6dd3c5c5-8f97-432b-8eb8-50f61508756b", 00:18:40.461 "is_configured": true, 00:18:40.461 "data_offset": 2048, 00:18:40.461 "data_size": 63488 00:18:40.461 }, 00:18:40.461 { 00:18:40.461 "name": 
"BaseBdev4", 00:18:40.461 "uuid": "08511a86-a784-4d9c-99e9-497bdcb2eac5", 00:18:40.461 "is_configured": true, 00:18:40.461 "data_offset": 2048, 00:18:40.461 "data_size": 63488 00:18:40.461 } 00:18:40.461 ] 00:18:40.461 }' 00:18:40.461 13:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:40.461 13:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.029 13:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.029 13:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:41.289 13:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:41.289 13:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:41.548 [2024-07-26 13:18:21.843364] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:41.548 13:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:41.548 13:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:41.548 13:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:41.548 13:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:41.548 13:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:41.548 13:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:41.548 13:18:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:41.548 13:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:41.548 13:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:41.548 13:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:41.548 13:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.548 13:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.807 13:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:41.807 "name": "Existed_Raid", 00:18:41.807 "uuid": "a72c5e75-0b16-4515-a0f7-80350bdc7c93", 00:18:41.807 "strip_size_kb": 64, 00:18:41.807 "state": "configuring", 00:18:41.807 "raid_level": "raid0", 00:18:41.807 "superblock": true, 00:18:41.807 "num_base_bdevs": 4, 00:18:41.807 "num_base_bdevs_discovered": 2, 00:18:41.807 "num_base_bdevs_operational": 4, 00:18:41.807 "base_bdevs_list": [ 00:18:41.807 { 00:18:41.807 "name": "BaseBdev1", 00:18:41.807 "uuid": "803cee8e-592e-4e40-8881-e367fb893ba0", 00:18:41.807 "is_configured": true, 00:18:41.807 "data_offset": 2048, 00:18:41.807 "data_size": 63488 00:18:41.807 }, 00:18:41.807 { 00:18:41.807 "name": null, 00:18:41.807 "uuid": "4681cf95-ddf5-47cc-828c-9be33af555d2", 00:18:41.807 "is_configured": false, 00:18:41.807 "data_offset": 2048, 00:18:41.807 "data_size": 63488 00:18:41.807 }, 00:18:41.807 { 00:18:41.807 "name": null, 00:18:41.807 "uuid": "6dd3c5c5-8f97-432b-8eb8-50f61508756b", 00:18:41.807 "is_configured": false, 00:18:41.807 "data_offset": 2048, 00:18:41.807 "data_size": 63488 00:18:41.807 }, 00:18:41.807 { 00:18:41.807 "name": "BaseBdev4", 
00:18:41.807 "uuid": "08511a86-a784-4d9c-99e9-497bdcb2eac5", 00:18:41.807 "is_configured": true, 00:18:41.807 "data_offset": 2048, 00:18:41.807 "data_size": 63488 00:18:41.807 } 00:18:41.807 ] 00:18:41.807 }' 00:18:41.807 13:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:41.807 13:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.374 13:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.374 13:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:42.374 13:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:42.374 13:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:42.634 [2024-07-26 13:18:23.006549] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:42.634 13:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:42.634 13:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:42.634 13:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:42.634 13:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:42.634 13:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:42.634 13:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:42.634 13:18:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:42.634 13:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:42.634 13:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:42.634 13:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:42.634 13:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.634 13:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:42.893 13:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:42.893 "name": "Existed_Raid", 00:18:42.893 "uuid": "a72c5e75-0b16-4515-a0f7-80350bdc7c93", 00:18:42.893 "strip_size_kb": 64, 00:18:42.893 "state": "configuring", 00:18:42.893 "raid_level": "raid0", 00:18:42.893 "superblock": true, 00:18:42.893 "num_base_bdevs": 4, 00:18:42.893 "num_base_bdevs_discovered": 3, 00:18:42.893 "num_base_bdevs_operational": 4, 00:18:42.893 "base_bdevs_list": [ 00:18:42.893 { 00:18:42.893 "name": "BaseBdev1", 00:18:42.893 "uuid": "803cee8e-592e-4e40-8881-e367fb893ba0", 00:18:42.893 "is_configured": true, 00:18:42.893 "data_offset": 2048, 00:18:42.893 "data_size": 63488 00:18:42.893 }, 00:18:42.893 { 00:18:42.893 "name": null, 00:18:42.893 "uuid": "4681cf95-ddf5-47cc-828c-9be33af555d2", 00:18:42.893 "is_configured": false, 00:18:42.893 "data_offset": 2048, 00:18:42.893 "data_size": 63488 00:18:42.893 }, 00:18:42.893 { 00:18:42.893 "name": "BaseBdev3", 00:18:42.893 "uuid": "6dd3c5c5-8f97-432b-8eb8-50f61508756b", 00:18:42.893 "is_configured": true, 00:18:42.893 "data_offset": 2048, 00:18:42.893 "data_size": 63488 00:18:42.893 }, 00:18:42.893 { 00:18:42.893 "name": "BaseBdev4", 
00:18:42.893 "uuid": "08511a86-a784-4d9c-99e9-497bdcb2eac5", 00:18:42.893 "is_configured": true, 00:18:42.893 "data_offset": 2048, 00:18:42.893 "data_size": 63488 00:18:42.893 } 00:18:42.893 ] 00:18:42.893 }' 00:18:42.893 13:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:42.893 13:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.463 13:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.463 13:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:43.722 13:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:43.722 13:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:43.981 [2024-07-26 13:18:24.257873] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:43.981 13:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:43.981 13:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:43.981 13:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:43.981 13:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:43.981 13:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:43.981 13:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:43.981 13:18:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:43.981 13:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:43.981 13:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:43.981 13:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:43.981 13:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.981 13:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.241 13:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:44.241 "name": "Existed_Raid", 00:18:44.241 "uuid": "a72c5e75-0b16-4515-a0f7-80350bdc7c93", 00:18:44.241 "strip_size_kb": 64, 00:18:44.241 "state": "configuring", 00:18:44.241 "raid_level": "raid0", 00:18:44.241 "superblock": true, 00:18:44.241 "num_base_bdevs": 4, 00:18:44.241 "num_base_bdevs_discovered": 2, 00:18:44.241 "num_base_bdevs_operational": 4, 00:18:44.241 "base_bdevs_list": [ 00:18:44.241 { 00:18:44.241 "name": null, 00:18:44.241 "uuid": "803cee8e-592e-4e40-8881-e367fb893ba0", 00:18:44.241 "is_configured": false, 00:18:44.241 "data_offset": 2048, 00:18:44.241 "data_size": 63488 00:18:44.241 }, 00:18:44.241 { 00:18:44.241 "name": null, 00:18:44.241 "uuid": "4681cf95-ddf5-47cc-828c-9be33af555d2", 00:18:44.241 "is_configured": false, 00:18:44.241 "data_offset": 2048, 00:18:44.241 "data_size": 63488 00:18:44.241 }, 00:18:44.241 { 00:18:44.241 "name": "BaseBdev3", 00:18:44.241 "uuid": "6dd3c5c5-8f97-432b-8eb8-50f61508756b", 00:18:44.241 "is_configured": true, 00:18:44.241 "data_offset": 2048, 00:18:44.241 "data_size": 63488 00:18:44.241 }, 00:18:44.241 { 00:18:44.241 "name": "BaseBdev4", 00:18:44.241 "uuid": 
"08511a86-a784-4d9c-99e9-497bdcb2eac5", 00:18:44.241 "is_configured": true, 00:18:44.241 "data_offset": 2048, 00:18:44.241 "data_size": 63488 00:18:44.241 } 00:18:44.241 ] 00:18:44.241 }' 00:18:44.241 13:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:44.241 13:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.809 13:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.809 13:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:44.809 13:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:44.809 13:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:45.069 [2024-07-26 13:18:25.495051] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:45.069 13:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:45.069 13:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:45.069 13:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:45.069 13:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:45.069 13:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:45.069 13:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:45.069 13:18:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:45.069 13:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:45.069 13:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:45.069 13:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:45.069 13:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.069 13:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.328 13:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:45.328 "name": "Existed_Raid", 00:18:45.328 "uuid": "a72c5e75-0b16-4515-a0f7-80350bdc7c93", 00:18:45.328 "strip_size_kb": 64, 00:18:45.328 "state": "configuring", 00:18:45.328 "raid_level": "raid0", 00:18:45.328 "superblock": true, 00:18:45.328 "num_base_bdevs": 4, 00:18:45.328 "num_base_bdevs_discovered": 3, 00:18:45.328 "num_base_bdevs_operational": 4, 00:18:45.328 "base_bdevs_list": [ 00:18:45.328 { 00:18:45.328 "name": null, 00:18:45.328 "uuid": "803cee8e-592e-4e40-8881-e367fb893ba0", 00:18:45.328 "is_configured": false, 00:18:45.328 "data_offset": 2048, 00:18:45.328 "data_size": 63488 00:18:45.328 }, 00:18:45.328 { 00:18:45.328 "name": "BaseBdev2", 00:18:45.328 "uuid": "4681cf95-ddf5-47cc-828c-9be33af555d2", 00:18:45.328 "is_configured": true, 00:18:45.328 "data_offset": 2048, 00:18:45.328 "data_size": 63488 00:18:45.328 }, 00:18:45.328 { 00:18:45.328 "name": "BaseBdev3", 00:18:45.328 "uuid": "6dd3c5c5-8f97-432b-8eb8-50f61508756b", 00:18:45.328 "is_configured": true, 00:18:45.328 "data_offset": 2048, 00:18:45.328 "data_size": 63488 00:18:45.328 }, 00:18:45.328 { 00:18:45.328 "name": "BaseBdev4", 
00:18:45.328 "uuid": "08511a86-a784-4d9c-99e9-497bdcb2eac5", 00:18:45.328 "is_configured": true, 00:18:45.328 "data_offset": 2048, 00:18:45.328 "data_size": 63488 00:18:45.328 } 00:18:45.328 ] 00:18:45.328 }' 00:18:45.328 13:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:45.328 13:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.897 13:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.897 13:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:46.156 13:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:46.156 13:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.156 13:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:46.415 13:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 803cee8e-592e-4e40-8881-e367fb893ba0 00:18:46.415 [2024-07-26 13:18:26.913979] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:46.415 [2024-07-26 13:18:26.914119] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x15eb2c0 00:18:46.415 [2024-07-26 13:18:26.914132] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:46.415 [2024-07-26 13:18:26.914299] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x179baa0 00:18:46.415 [2024-07-26 13:18:26.914407] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15eb2c0 00:18:46.415 [2024-07-26 13:18:26.914416] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x15eb2c0 00:18:46.415 [2024-07-26 13:18:26.914502] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:46.415 NewBaseBdev 00:18:46.415 13:18:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:46.415 13:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:18:46.415 13:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:46.415 13:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:18:46.415 13:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:46.415 13:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:46.415 13:18:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:46.675 13:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:46.947 [ 00:18:46.947 { 00:18:46.947 "name": "NewBaseBdev", 00:18:46.947 "aliases": [ 00:18:46.947 "803cee8e-592e-4e40-8881-e367fb893ba0" 00:18:46.947 ], 00:18:46.947 "product_name": "Malloc disk", 00:18:46.947 "block_size": 512, 00:18:46.947 "num_blocks": 65536, 00:18:46.947 "uuid": "803cee8e-592e-4e40-8881-e367fb893ba0", 00:18:46.947 "assigned_rate_limits": { 00:18:46.947 "rw_ios_per_sec": 0, 00:18:46.947 "rw_mbytes_per_sec": 0, 00:18:46.947 "r_mbytes_per_sec": 0, 00:18:46.947 
"w_mbytes_per_sec": 0 00:18:46.947 }, 00:18:46.947 "claimed": true, 00:18:46.947 "claim_type": "exclusive_write", 00:18:46.947 "zoned": false, 00:18:46.947 "supported_io_types": { 00:18:46.947 "read": true, 00:18:46.947 "write": true, 00:18:46.947 "unmap": true, 00:18:46.947 "flush": true, 00:18:46.947 "reset": true, 00:18:46.947 "nvme_admin": false, 00:18:46.947 "nvme_io": false, 00:18:46.947 "nvme_io_md": false, 00:18:46.947 "write_zeroes": true, 00:18:46.947 "zcopy": true, 00:18:46.947 "get_zone_info": false, 00:18:46.947 "zone_management": false, 00:18:46.947 "zone_append": false, 00:18:46.947 "compare": false, 00:18:46.947 "compare_and_write": false, 00:18:46.947 "abort": true, 00:18:46.947 "seek_hole": false, 00:18:46.947 "seek_data": false, 00:18:46.947 "copy": true, 00:18:46.947 "nvme_iov_md": false 00:18:46.947 }, 00:18:46.947 "memory_domains": [ 00:18:46.947 { 00:18:46.947 "dma_device_id": "system", 00:18:46.947 "dma_device_type": 1 00:18:46.947 }, 00:18:46.947 { 00:18:46.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.947 "dma_device_type": 2 00:18:46.947 } 00:18:46.947 ], 00:18:46.947 "driver_specific": {} 00:18:46.947 } 00:18:46.947 ] 00:18:46.947 13:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:18:46.947 13:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:46.947 13:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:46.947 13:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:46.947 13:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:46.947 13:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:46.947 13:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:18:46.947 13:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:46.947 13:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:46.947 13:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:46.947 13:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:46.947 13:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.947 13:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.216 13:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:47.216 "name": "Existed_Raid", 00:18:47.216 "uuid": "a72c5e75-0b16-4515-a0f7-80350bdc7c93", 00:18:47.216 "strip_size_kb": 64, 00:18:47.216 "state": "online", 00:18:47.216 "raid_level": "raid0", 00:18:47.216 "superblock": true, 00:18:47.216 "num_base_bdevs": 4, 00:18:47.216 "num_base_bdevs_discovered": 4, 00:18:47.216 "num_base_bdevs_operational": 4, 00:18:47.216 "base_bdevs_list": [ 00:18:47.216 { 00:18:47.216 "name": "NewBaseBdev", 00:18:47.216 "uuid": "803cee8e-592e-4e40-8881-e367fb893ba0", 00:18:47.216 "is_configured": true, 00:18:47.216 "data_offset": 2048, 00:18:47.216 "data_size": 63488 00:18:47.216 }, 00:18:47.216 { 00:18:47.216 "name": "BaseBdev2", 00:18:47.216 "uuid": "4681cf95-ddf5-47cc-828c-9be33af555d2", 00:18:47.216 "is_configured": true, 00:18:47.216 "data_offset": 2048, 00:18:47.216 "data_size": 63488 00:18:47.216 }, 00:18:47.216 { 00:18:47.216 "name": "BaseBdev3", 00:18:47.216 "uuid": "6dd3c5c5-8f97-432b-8eb8-50f61508756b", 00:18:47.216 "is_configured": true, 00:18:47.216 "data_offset": 2048, 00:18:47.216 "data_size": 63488 00:18:47.216 }, 
00:18:47.216 { 00:18:47.216 "name": "BaseBdev4", 00:18:47.216 "uuid": "08511a86-a784-4d9c-99e9-497bdcb2eac5", 00:18:47.216 "is_configured": true, 00:18:47.216 "data_offset": 2048, 00:18:47.216 "data_size": 63488 00:18:47.216 } 00:18:47.216 ] 00:18:47.216 }' 00:18:47.216 13:18:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:47.216 13:18:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.784 13:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:47.784 13:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:47.784 13:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:47.784 13:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:47.784 13:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:47.784 13:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:47.784 13:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:47.784 13:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:48.044 [2024-07-26 13:18:28.382270] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:48.044 13:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:48.044 "name": "Existed_Raid", 00:18:48.044 "aliases": [ 00:18:48.044 "a72c5e75-0b16-4515-a0f7-80350bdc7c93" 00:18:48.044 ], 00:18:48.044 "product_name": "Raid Volume", 00:18:48.044 "block_size": 512, 00:18:48.044 "num_blocks": 253952, 00:18:48.044 "uuid": "a72c5e75-0b16-4515-a0f7-80350bdc7c93", 
00:18:48.044 "assigned_rate_limits": { 00:18:48.044 "rw_ios_per_sec": 0, 00:18:48.044 "rw_mbytes_per_sec": 0, 00:18:48.044 "r_mbytes_per_sec": 0, 00:18:48.044 "w_mbytes_per_sec": 0 00:18:48.044 }, 00:18:48.044 "claimed": false, 00:18:48.044 "zoned": false, 00:18:48.044 "supported_io_types": { 00:18:48.044 "read": true, 00:18:48.044 "write": true, 00:18:48.044 "unmap": true, 00:18:48.044 "flush": true, 00:18:48.044 "reset": true, 00:18:48.044 "nvme_admin": false, 00:18:48.044 "nvme_io": false, 00:18:48.044 "nvme_io_md": false, 00:18:48.044 "write_zeroes": true, 00:18:48.044 "zcopy": false, 00:18:48.044 "get_zone_info": false, 00:18:48.044 "zone_management": false, 00:18:48.044 "zone_append": false, 00:18:48.044 "compare": false, 00:18:48.044 "compare_and_write": false, 00:18:48.044 "abort": false, 00:18:48.044 "seek_hole": false, 00:18:48.044 "seek_data": false, 00:18:48.044 "copy": false, 00:18:48.044 "nvme_iov_md": false 00:18:48.044 }, 00:18:48.044 "memory_domains": [ 00:18:48.044 { 00:18:48.044 "dma_device_id": "system", 00:18:48.044 "dma_device_type": 1 00:18:48.044 }, 00:18:48.044 { 00:18:48.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:48.044 "dma_device_type": 2 00:18:48.044 }, 00:18:48.044 { 00:18:48.044 "dma_device_id": "system", 00:18:48.044 "dma_device_type": 1 00:18:48.044 }, 00:18:48.044 { 00:18:48.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:48.044 "dma_device_type": 2 00:18:48.044 }, 00:18:48.044 { 00:18:48.044 "dma_device_id": "system", 00:18:48.044 "dma_device_type": 1 00:18:48.044 }, 00:18:48.044 { 00:18:48.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:48.044 "dma_device_type": 2 00:18:48.044 }, 00:18:48.044 { 00:18:48.044 "dma_device_id": "system", 00:18:48.044 "dma_device_type": 1 00:18:48.044 }, 00:18:48.044 { 00:18:48.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:48.044 "dma_device_type": 2 00:18:48.044 } 00:18:48.044 ], 00:18:48.044 "driver_specific": { 00:18:48.044 "raid": { 00:18:48.044 "uuid": 
"a72c5e75-0b16-4515-a0f7-80350bdc7c93", 00:18:48.044 "strip_size_kb": 64, 00:18:48.044 "state": "online", 00:18:48.044 "raid_level": "raid0", 00:18:48.044 "superblock": true, 00:18:48.044 "num_base_bdevs": 4, 00:18:48.044 "num_base_bdevs_discovered": 4, 00:18:48.044 "num_base_bdevs_operational": 4, 00:18:48.044 "base_bdevs_list": [ 00:18:48.044 { 00:18:48.044 "name": "NewBaseBdev", 00:18:48.044 "uuid": "803cee8e-592e-4e40-8881-e367fb893ba0", 00:18:48.044 "is_configured": true, 00:18:48.044 "data_offset": 2048, 00:18:48.044 "data_size": 63488 00:18:48.044 }, 00:18:48.044 { 00:18:48.044 "name": "BaseBdev2", 00:18:48.044 "uuid": "4681cf95-ddf5-47cc-828c-9be33af555d2", 00:18:48.044 "is_configured": true, 00:18:48.044 "data_offset": 2048, 00:18:48.044 "data_size": 63488 00:18:48.044 }, 00:18:48.044 { 00:18:48.044 "name": "BaseBdev3", 00:18:48.044 "uuid": "6dd3c5c5-8f97-432b-8eb8-50f61508756b", 00:18:48.044 "is_configured": true, 00:18:48.044 "data_offset": 2048, 00:18:48.044 "data_size": 63488 00:18:48.044 }, 00:18:48.044 { 00:18:48.044 "name": "BaseBdev4", 00:18:48.044 "uuid": "08511a86-a784-4d9c-99e9-497bdcb2eac5", 00:18:48.044 "is_configured": true, 00:18:48.044 "data_offset": 2048, 00:18:48.044 "data_size": 63488 00:18:48.044 } 00:18:48.044 ] 00:18:48.044 } 00:18:48.044 } 00:18:48.044 }' 00:18:48.044 13:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:48.044 13:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:48.044 BaseBdev2 00:18:48.044 BaseBdev3 00:18:48.044 BaseBdev4' 00:18:48.044 13:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:48.044 13:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
NewBaseBdev 00:18:48.044 13:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:48.304 13:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:48.304 "name": "NewBaseBdev", 00:18:48.304 "aliases": [ 00:18:48.304 "803cee8e-592e-4e40-8881-e367fb893ba0" 00:18:48.304 ], 00:18:48.304 "product_name": "Malloc disk", 00:18:48.304 "block_size": 512, 00:18:48.304 "num_blocks": 65536, 00:18:48.304 "uuid": "803cee8e-592e-4e40-8881-e367fb893ba0", 00:18:48.304 "assigned_rate_limits": { 00:18:48.304 "rw_ios_per_sec": 0, 00:18:48.304 "rw_mbytes_per_sec": 0, 00:18:48.304 "r_mbytes_per_sec": 0, 00:18:48.304 "w_mbytes_per_sec": 0 00:18:48.304 }, 00:18:48.304 "claimed": true, 00:18:48.304 "claim_type": "exclusive_write", 00:18:48.304 "zoned": false, 00:18:48.304 "supported_io_types": { 00:18:48.304 "read": true, 00:18:48.304 "write": true, 00:18:48.304 "unmap": true, 00:18:48.304 "flush": true, 00:18:48.304 "reset": true, 00:18:48.304 "nvme_admin": false, 00:18:48.304 "nvme_io": false, 00:18:48.304 "nvme_io_md": false, 00:18:48.304 "write_zeroes": true, 00:18:48.304 "zcopy": true, 00:18:48.304 "get_zone_info": false, 00:18:48.304 "zone_management": false, 00:18:48.304 "zone_append": false, 00:18:48.304 "compare": false, 00:18:48.304 "compare_and_write": false, 00:18:48.304 "abort": true, 00:18:48.304 "seek_hole": false, 00:18:48.304 "seek_data": false, 00:18:48.304 "copy": true, 00:18:48.304 "nvme_iov_md": false 00:18:48.304 }, 00:18:48.304 "memory_domains": [ 00:18:48.304 { 00:18:48.304 "dma_device_id": "system", 00:18:48.304 "dma_device_type": 1 00:18:48.304 }, 00:18:48.304 { 00:18:48.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:48.304 "dma_device_type": 2 00:18:48.304 } 00:18:48.304 ], 00:18:48.304 "driver_specific": {} 00:18:48.304 }' 00:18:48.304 13:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:48.304 13:18:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:48.304 13:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:48.304 13:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:48.304 13:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:48.563 13:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:48.563 13:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:48.563 13:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:48.563 13:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:48.563 13:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:48.563 13:18:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:48.563 13:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:48.563 13:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:48.563 13:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:48.563 13:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:48.822 13:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:48.822 "name": "BaseBdev2", 00:18:48.822 "aliases": [ 00:18:48.822 "4681cf95-ddf5-47cc-828c-9be33af555d2" 00:18:48.822 ], 00:18:48.822 "product_name": "Malloc disk", 00:18:48.822 "block_size": 512, 00:18:48.822 "num_blocks": 65536, 00:18:48.822 "uuid": "4681cf95-ddf5-47cc-828c-9be33af555d2", 00:18:48.822 
"assigned_rate_limits": { 00:18:48.822 "rw_ios_per_sec": 0, 00:18:48.822 "rw_mbytes_per_sec": 0, 00:18:48.822 "r_mbytes_per_sec": 0, 00:18:48.822 "w_mbytes_per_sec": 0 00:18:48.822 }, 00:18:48.822 "claimed": true, 00:18:48.822 "claim_type": "exclusive_write", 00:18:48.822 "zoned": false, 00:18:48.822 "supported_io_types": { 00:18:48.822 "read": true, 00:18:48.822 "write": true, 00:18:48.822 "unmap": true, 00:18:48.822 "flush": true, 00:18:48.822 "reset": true, 00:18:48.822 "nvme_admin": false, 00:18:48.822 "nvme_io": false, 00:18:48.822 "nvme_io_md": false, 00:18:48.822 "write_zeroes": true, 00:18:48.822 "zcopy": true, 00:18:48.822 "get_zone_info": false, 00:18:48.822 "zone_management": false, 00:18:48.822 "zone_append": false, 00:18:48.822 "compare": false, 00:18:48.822 "compare_and_write": false, 00:18:48.822 "abort": true, 00:18:48.822 "seek_hole": false, 00:18:48.822 "seek_data": false, 00:18:48.822 "copy": true, 00:18:48.822 "nvme_iov_md": false 00:18:48.822 }, 00:18:48.822 "memory_domains": [ 00:18:48.822 { 00:18:48.822 "dma_device_id": "system", 00:18:48.822 "dma_device_type": 1 00:18:48.822 }, 00:18:48.822 { 00:18:48.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:48.822 "dma_device_type": 2 00:18:48.822 } 00:18:48.822 ], 00:18:48.822 "driver_specific": {} 00:18:48.822 }' 00:18:48.822 13:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:48.822 13:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:48.822 13:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:49.081 13:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:49.081 13:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:49.081 13:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:49.081 13:18:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:49.081 13:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:49.081 13:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:49.081 13:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:49.081 13:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:49.340 13:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:49.340 13:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:49.340 13:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:49.340 13:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:49.340 13:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:49.340 "name": "BaseBdev3", 00:18:49.340 "aliases": [ 00:18:49.340 "6dd3c5c5-8f97-432b-8eb8-50f61508756b" 00:18:49.340 ], 00:18:49.340 "product_name": "Malloc disk", 00:18:49.340 "block_size": 512, 00:18:49.340 "num_blocks": 65536, 00:18:49.340 "uuid": "6dd3c5c5-8f97-432b-8eb8-50f61508756b", 00:18:49.340 "assigned_rate_limits": { 00:18:49.340 "rw_ios_per_sec": 0, 00:18:49.340 "rw_mbytes_per_sec": 0, 00:18:49.340 "r_mbytes_per_sec": 0, 00:18:49.340 "w_mbytes_per_sec": 0 00:18:49.340 }, 00:18:49.341 "claimed": true, 00:18:49.341 "claim_type": "exclusive_write", 00:18:49.341 "zoned": false, 00:18:49.341 "supported_io_types": { 00:18:49.341 "read": true, 00:18:49.341 "write": true, 00:18:49.341 "unmap": true, 00:18:49.341 "flush": true, 00:18:49.341 "reset": true, 00:18:49.341 "nvme_admin": false, 00:18:49.341 "nvme_io": false, 00:18:49.341 "nvme_io_md": false, 00:18:49.341 
"write_zeroes": true, 00:18:49.341 "zcopy": true, 00:18:49.341 "get_zone_info": false, 00:18:49.341 "zone_management": false, 00:18:49.341 "zone_append": false, 00:18:49.341 "compare": false, 00:18:49.341 "compare_and_write": false, 00:18:49.341 "abort": true, 00:18:49.341 "seek_hole": false, 00:18:49.341 "seek_data": false, 00:18:49.341 "copy": true, 00:18:49.341 "nvme_iov_md": false 00:18:49.341 }, 00:18:49.341 "memory_domains": [ 00:18:49.341 { 00:18:49.341 "dma_device_id": "system", 00:18:49.341 "dma_device_type": 1 00:18:49.341 }, 00:18:49.341 { 00:18:49.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.341 "dma_device_type": 2 00:18:49.341 } 00:18:49.341 ], 00:18:49.341 "driver_specific": {} 00:18:49.341 }' 00:18:49.341 13:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:49.600 13:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:49.600 13:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:49.600 13:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:49.600 13:18:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:49.600 13:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:49.600 13:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:49.600 13:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:49.600 13:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:49.600 13:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:49.859 13:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:49.859 13:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:18:49.859 13:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:49.859 13:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:18:49.859 13:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:49.859 13:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:49.859 "name": "BaseBdev4", 00:18:49.859 "aliases": [ 00:18:49.859 "08511a86-a784-4d9c-99e9-497bdcb2eac5" 00:18:49.859 ], 00:18:49.859 "product_name": "Malloc disk", 00:18:49.859 "block_size": 512, 00:18:49.859 "num_blocks": 65536, 00:18:49.859 "uuid": "08511a86-a784-4d9c-99e9-497bdcb2eac5", 00:18:49.859 "assigned_rate_limits": { 00:18:49.859 "rw_ios_per_sec": 0, 00:18:49.859 "rw_mbytes_per_sec": 0, 00:18:49.859 "r_mbytes_per_sec": 0, 00:18:49.859 "w_mbytes_per_sec": 0 00:18:49.859 }, 00:18:49.859 "claimed": true, 00:18:49.859 "claim_type": "exclusive_write", 00:18:49.859 "zoned": false, 00:18:49.859 "supported_io_types": { 00:18:49.859 "read": true, 00:18:49.859 "write": true, 00:18:49.859 "unmap": true, 00:18:49.859 "flush": true, 00:18:49.859 "reset": true, 00:18:49.859 "nvme_admin": false, 00:18:49.859 "nvme_io": false, 00:18:49.859 "nvme_io_md": false, 00:18:49.859 "write_zeroes": true, 00:18:49.859 "zcopy": true, 00:18:49.859 "get_zone_info": false, 00:18:49.859 "zone_management": false, 00:18:49.859 "zone_append": false, 00:18:49.859 "compare": false, 00:18:49.859 "compare_and_write": false, 00:18:49.859 "abort": true, 00:18:49.859 "seek_hole": false, 00:18:49.859 "seek_data": false, 00:18:49.859 "copy": true, 00:18:49.859 "nvme_iov_md": false 00:18:49.859 }, 00:18:49.859 "memory_domains": [ 00:18:49.859 { 00:18:49.859 "dma_device_id": "system", 00:18:49.859 "dma_device_type": 1 00:18:49.859 }, 00:18:49.859 { 00:18:49.859 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.859 "dma_device_type": 2 00:18:49.859 } 00:18:49.859 ], 00:18:49.859 "driver_specific": {} 00:18:49.859 }' 00:18:50.118 13:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:50.118 13:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:50.118 13:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:50.118 13:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:50.118 13:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:50.118 13:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:50.118 13:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:50.118 13:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:50.118 13:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:50.118 13:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:50.378 13:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:50.378 13:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:50.378 13:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:50.637 [2024-07-26 13:18:30.932767] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:50.637 [2024-07-26 13:18:30.932790] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:50.637 [2024-07-26 13:18:30.932842] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:18:50.637 [2024-07-26 13:18:30.932897] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:50.637 [2024-07-26 13:18:30.932908] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15eb2c0 name Existed_Raid, state offline 00:18:50.637 13:18:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 733068 00:18:50.637 13:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 733068 ']' 00:18:50.637 13:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 733068 00:18:50.637 13:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:18:50.637 13:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:50.637 13:18:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 733068 00:18:50.637 13:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:50.637 13:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:50.637 13:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 733068' 00:18:50.637 killing process with pid 733068 00:18:50.637 13:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 733068 00:18:50.637 [2024-07-26 13:18:31.010225] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:50.637 13:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 733068 00:18:50.637 [2024-07-26 13:18:31.041655] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:50.896 13:18:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:18:50.896 
00:18:50.896 real 0m30.549s 00:18:50.896 user 0m56.125s 00:18:50.896 sys 0m5.427s 00:18:50.896 13:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:50.896 13:18:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.896 ************************************ 00:18:50.896 END TEST raid_state_function_test_sb 00:18:50.896 ************************************ 00:18:50.896 13:18:31 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:18:50.896 13:18:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:50.896 13:18:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:50.896 13:18:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:50.896 ************************************ 00:18:50.896 START TEST raid_superblock_test 00:18:50.896 ************************************ 00:18:50.896 13:18:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:18:50.896 13:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:18:50.896 13:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:18:50.896 13:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:18:50.896 13:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:18:50.896 13:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:18:50.896 13:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:18:50.896 13:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:18:50.896 13:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:18:50.896 13:18:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:18:50.896 13:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:18:50.896 13:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:18:50.896 13:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:18:50.897 13:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:18:50.897 13:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:18:50.897 13:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:18:50.897 13:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:18:50.897 13:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=739443 00:18:50.897 13:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 739443 /var/tmp/spdk-raid.sock 00:18:50.897 13:18:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:50.897 13:18:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 739443 ']' 00:18:50.897 13:18:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:50.897 13:18:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:50.897 13:18:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:50.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:18:50.897 13:18:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:50.897 13:18:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.897 [2024-07-26 13:18:31.359331] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:18:50.897 [2024-07-26 13:18:31.359388] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid739443 ] 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3f:01.7 
cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:51.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:51.156 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:51.156 [2024-07-26 13:18:31.490509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.156 [2024-07-26 13:18:31.576843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.156 [2024-07-26 13:18:31.638214] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:51.156 [2024-07-26 13:18:31.638249] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:51.725 13:18:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:51.725 13:18:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:18:51.725 13:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:18:51.725 13:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 
00:18:51.725 13:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:18:51.725 13:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:18:51.725 13:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:51.725 13:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:51.725 13:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:18:51.725 13:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:51.984 13:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:51.984 malloc1 00:18:51.984 13:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:52.244 [2024-07-26 13:18:32.679998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:52.244 [2024-07-26 13:18:32.680041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.244 [2024-07-26 13:18:32.680060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc1a2f0 00:18:52.244 [2024-07-26 13:18:32.680071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.244 [2024-07-26 13:18:32.681583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.244 [2024-07-26 13:18:32.681610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:52.244 pt1 00:18:52.244 13:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( 
i++ )) 00:18:52.244 13:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:18:52.244 13:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:18:52.244 13:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:18:52.244 13:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:52.244 13:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:52.244 13:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:18:52.244 13:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:52.244 13:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:52.503 malloc2 00:18:52.503 13:18:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:52.762 [2024-07-26 13:18:33.141770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:52.762 [2024-07-26 13:18:33.141813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.762 [2024-07-26 13:18:33.141829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc1b6d0 00:18:52.762 [2024-07-26 13:18:33.141840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.762 [2024-07-26 13:18:33.143311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.762 [2024-07-26 13:18:33.143340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt2 00:18:52.762 pt2 00:18:52.762 13:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:18:52.762 13:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:18:52.763 13:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:18:52.763 13:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:18:52.763 13:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:52.763 13:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:52.763 13:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:18:52.763 13:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:52.763 13:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:53.021 malloc3 00:18:53.021 13:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:53.290 [2024-07-26 13:18:33.599351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:53.290 [2024-07-26 13:18:33.599394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.290 [2024-07-26 13:18:33.599410] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdb46b0 00:18:53.290 [2024-07-26 13:18:33.599422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.290 [2024-07-26 13:18:33.600797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:18:53.290 [2024-07-26 13:18:33.600824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:53.290 pt3 00:18:53.290 13:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:18:53.290 13:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:18:53.290 13:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:18:53.290 13:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:18:53.290 13:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:53.290 13:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:53.290 13:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:18:53.290 13:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:53.290 13:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:53.549 malloc4 00:18:53.549 13:18:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:53.549 [2024-07-26 13:18:34.057060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:53.549 [2024-07-26 13:18:34.057104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.549 [2024-07-26 13:18:34.057121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdb2370 00:18:53.549 [2024-07-26 13:18:34.057132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:18:53.549 [2024-07-26 13:18:34.058511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.549 [2024-07-26 13:18:34.058539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:53.549 pt4 00:18:53.809 13:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:18:53.809 13:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:18:53.809 13:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:53.809 [2024-07-26 13:18:34.285683] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:53.809 [2024-07-26 13:18:34.286863] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:53.809 [2024-07-26 13:18:34.286915] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:53.809 [2024-07-26 13:18:34.286957] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:53.809 [2024-07-26 13:18:34.287104] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xc13560 00:18:53.809 [2024-07-26 13:18:34.287115] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:53.809 [2024-07-26 13:18:34.287313] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc174d0 00:18:53.809 [2024-07-26 13:18:34.287444] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc13560 00:18:53.809 [2024-07-26 13:18:34.287453] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xc13560 00:18:53.809 [2024-07-26 13:18:34.287560] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:53.809 13:18:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:53.809 13:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:53.809 13:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:53.809 13:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:53.809 13:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:53.809 13:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:18:53.809 13:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:53.809 13:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:53.809 13:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:53.809 13:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:53.809 13:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.809 13:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.069 13:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:54.069 "name": "raid_bdev1", 00:18:54.069 "uuid": "d3cc324a-7ed6-4b0d-815c-bff908987970", 00:18:54.069 "strip_size_kb": 64, 00:18:54.069 "state": "online", 00:18:54.069 "raid_level": "raid0", 00:18:54.069 "superblock": true, 00:18:54.069 "num_base_bdevs": 4, 00:18:54.069 "num_base_bdevs_discovered": 4, 00:18:54.069 "num_base_bdevs_operational": 4, 00:18:54.069 "base_bdevs_list": [ 00:18:54.069 { 00:18:54.069 "name": "pt1", 00:18:54.069 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:54.069 "is_configured": 
true, 00:18:54.069 "data_offset": 2048, 00:18:54.069 "data_size": 63488 00:18:54.069 }, 00:18:54.069 { 00:18:54.069 "name": "pt2", 00:18:54.069 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:54.069 "is_configured": true, 00:18:54.069 "data_offset": 2048, 00:18:54.069 "data_size": 63488 00:18:54.069 }, 00:18:54.069 { 00:18:54.069 "name": "pt3", 00:18:54.069 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:54.069 "is_configured": true, 00:18:54.069 "data_offset": 2048, 00:18:54.069 "data_size": 63488 00:18:54.069 }, 00:18:54.069 { 00:18:54.069 "name": "pt4", 00:18:54.069 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:54.069 "is_configured": true, 00:18:54.069 "data_offset": 2048, 00:18:54.069 "data_size": 63488 00:18:54.069 } 00:18:54.069 ] 00:18:54.069 }' 00:18:54.069 13:18:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:54.069 13:18:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.637 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:18:54.637 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:54.637 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:54.637 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:54.637 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:54.637 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:54.637 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:54.637 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:54.897 [2024-07-26 13:18:35.292586] 
bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:54.897 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:54.897 "name": "raid_bdev1", 00:18:54.897 "aliases": [ 00:18:54.897 "d3cc324a-7ed6-4b0d-815c-bff908987970" 00:18:54.897 ], 00:18:54.897 "product_name": "Raid Volume", 00:18:54.897 "block_size": 512, 00:18:54.897 "num_blocks": 253952, 00:18:54.897 "uuid": "d3cc324a-7ed6-4b0d-815c-bff908987970", 00:18:54.897 "assigned_rate_limits": { 00:18:54.897 "rw_ios_per_sec": 0, 00:18:54.897 "rw_mbytes_per_sec": 0, 00:18:54.897 "r_mbytes_per_sec": 0, 00:18:54.897 "w_mbytes_per_sec": 0 00:18:54.897 }, 00:18:54.897 "claimed": false, 00:18:54.897 "zoned": false, 00:18:54.897 "supported_io_types": { 00:18:54.897 "read": true, 00:18:54.897 "write": true, 00:18:54.897 "unmap": true, 00:18:54.897 "flush": true, 00:18:54.897 "reset": true, 00:18:54.897 "nvme_admin": false, 00:18:54.897 "nvme_io": false, 00:18:54.897 "nvme_io_md": false, 00:18:54.897 "write_zeroes": true, 00:18:54.897 "zcopy": false, 00:18:54.897 "get_zone_info": false, 00:18:54.897 "zone_management": false, 00:18:54.897 "zone_append": false, 00:18:54.897 "compare": false, 00:18:54.897 "compare_and_write": false, 00:18:54.897 "abort": false, 00:18:54.897 "seek_hole": false, 00:18:54.897 "seek_data": false, 00:18:54.897 "copy": false, 00:18:54.897 "nvme_iov_md": false 00:18:54.897 }, 00:18:54.897 "memory_domains": [ 00:18:54.897 { 00:18:54.897 "dma_device_id": "system", 00:18:54.897 "dma_device_type": 1 00:18:54.897 }, 00:18:54.897 { 00:18:54.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.897 "dma_device_type": 2 00:18:54.897 }, 00:18:54.897 { 00:18:54.897 "dma_device_id": "system", 00:18:54.897 "dma_device_type": 1 00:18:54.897 }, 00:18:54.897 { 00:18:54.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.897 "dma_device_type": 2 00:18:54.897 }, 00:18:54.897 { 00:18:54.897 "dma_device_id": "system", 00:18:54.897 
"dma_device_type": 1 00:18:54.897 }, 00:18:54.897 { 00:18:54.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.897 "dma_device_type": 2 00:18:54.897 }, 00:18:54.897 { 00:18:54.897 "dma_device_id": "system", 00:18:54.897 "dma_device_type": 1 00:18:54.897 }, 00:18:54.897 { 00:18:54.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.897 "dma_device_type": 2 00:18:54.897 } 00:18:54.897 ], 00:18:54.897 "driver_specific": { 00:18:54.897 "raid": { 00:18:54.897 "uuid": "d3cc324a-7ed6-4b0d-815c-bff908987970", 00:18:54.897 "strip_size_kb": 64, 00:18:54.897 "state": "online", 00:18:54.897 "raid_level": "raid0", 00:18:54.897 "superblock": true, 00:18:54.897 "num_base_bdevs": 4, 00:18:54.897 "num_base_bdevs_discovered": 4, 00:18:54.897 "num_base_bdevs_operational": 4, 00:18:54.897 "base_bdevs_list": [ 00:18:54.897 { 00:18:54.897 "name": "pt1", 00:18:54.897 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:54.897 "is_configured": true, 00:18:54.897 "data_offset": 2048, 00:18:54.897 "data_size": 63488 00:18:54.897 }, 00:18:54.897 { 00:18:54.897 "name": "pt2", 00:18:54.897 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:54.897 "is_configured": true, 00:18:54.897 "data_offset": 2048, 00:18:54.897 "data_size": 63488 00:18:54.897 }, 00:18:54.897 { 00:18:54.897 "name": "pt3", 00:18:54.897 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:54.897 "is_configured": true, 00:18:54.897 "data_offset": 2048, 00:18:54.897 "data_size": 63488 00:18:54.897 }, 00:18:54.897 { 00:18:54.897 "name": "pt4", 00:18:54.897 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:54.897 "is_configured": true, 00:18:54.897 "data_offset": 2048, 00:18:54.897 "data_size": 63488 00:18:54.897 } 00:18:54.897 ] 00:18:54.897 } 00:18:54.897 } 00:18:54.897 }' 00:18:54.897 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:54.897 13:18:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:54.897 pt2 00:18:54.897 pt3 00:18:54.897 pt4' 00:18:54.897 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:54.897 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:54.897 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:55.156 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:55.156 "name": "pt1", 00:18:55.156 "aliases": [ 00:18:55.156 "00000000-0000-0000-0000-000000000001" 00:18:55.156 ], 00:18:55.156 "product_name": "passthru", 00:18:55.156 "block_size": 512, 00:18:55.157 "num_blocks": 65536, 00:18:55.157 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:55.157 "assigned_rate_limits": { 00:18:55.157 "rw_ios_per_sec": 0, 00:18:55.157 "rw_mbytes_per_sec": 0, 00:18:55.157 "r_mbytes_per_sec": 0, 00:18:55.157 "w_mbytes_per_sec": 0 00:18:55.157 }, 00:18:55.157 "claimed": true, 00:18:55.157 "claim_type": "exclusive_write", 00:18:55.157 "zoned": false, 00:18:55.157 "supported_io_types": { 00:18:55.157 "read": true, 00:18:55.157 "write": true, 00:18:55.157 "unmap": true, 00:18:55.157 "flush": true, 00:18:55.157 "reset": true, 00:18:55.157 "nvme_admin": false, 00:18:55.157 "nvme_io": false, 00:18:55.157 "nvme_io_md": false, 00:18:55.157 "write_zeroes": true, 00:18:55.157 "zcopy": true, 00:18:55.157 "get_zone_info": false, 00:18:55.157 "zone_management": false, 00:18:55.157 "zone_append": false, 00:18:55.157 "compare": false, 00:18:55.157 "compare_and_write": false, 00:18:55.157 "abort": true, 00:18:55.157 "seek_hole": false, 00:18:55.157 "seek_data": false, 00:18:55.157 "copy": true, 00:18:55.157 "nvme_iov_md": false 00:18:55.157 }, 00:18:55.157 "memory_domains": [ 00:18:55.157 { 00:18:55.157 "dma_device_id": "system", 00:18:55.157 
"dma_device_type": 1 00:18:55.157 }, 00:18:55.157 { 00:18:55.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.157 "dma_device_type": 2 00:18:55.157 } 00:18:55.157 ], 00:18:55.157 "driver_specific": { 00:18:55.157 "passthru": { 00:18:55.157 "name": "pt1", 00:18:55.157 "base_bdev_name": "malloc1" 00:18:55.157 } 00:18:55.157 } 00:18:55.157 }' 00:18:55.157 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:55.157 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:55.157 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:55.157 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:55.416 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:55.416 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:55.416 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:55.416 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:55.416 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:55.416 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:55.416 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:55.416 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:55.416 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:55.416 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:55.416 13:18:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:55.676 13:18:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:55.676 "name": "pt2", 00:18:55.676 "aliases": [ 00:18:55.676 "00000000-0000-0000-0000-000000000002" 00:18:55.676 ], 00:18:55.676 "product_name": "passthru", 00:18:55.676 "block_size": 512, 00:18:55.676 "num_blocks": 65536, 00:18:55.676 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:55.676 "assigned_rate_limits": { 00:18:55.676 "rw_ios_per_sec": 0, 00:18:55.676 "rw_mbytes_per_sec": 0, 00:18:55.676 "r_mbytes_per_sec": 0, 00:18:55.676 "w_mbytes_per_sec": 0 00:18:55.676 }, 00:18:55.676 "claimed": true, 00:18:55.676 "claim_type": "exclusive_write", 00:18:55.676 "zoned": false, 00:18:55.676 "supported_io_types": { 00:18:55.676 "read": true, 00:18:55.676 "write": true, 00:18:55.676 "unmap": true, 00:18:55.676 "flush": true, 00:18:55.676 "reset": true, 00:18:55.676 "nvme_admin": false, 00:18:55.676 "nvme_io": false, 00:18:55.676 "nvme_io_md": false, 00:18:55.676 "write_zeroes": true, 00:18:55.676 "zcopy": true, 00:18:55.676 "get_zone_info": false, 00:18:55.676 "zone_management": false, 00:18:55.676 "zone_append": false, 00:18:55.676 "compare": false, 00:18:55.676 "compare_and_write": false, 00:18:55.676 "abort": true, 00:18:55.676 "seek_hole": false, 00:18:55.676 "seek_data": false, 00:18:55.676 "copy": true, 00:18:55.676 "nvme_iov_md": false 00:18:55.676 }, 00:18:55.676 "memory_domains": [ 00:18:55.676 { 00:18:55.676 "dma_device_id": "system", 00:18:55.676 "dma_device_type": 1 00:18:55.676 }, 00:18:55.676 { 00:18:55.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.676 "dma_device_type": 2 00:18:55.676 } 00:18:55.676 ], 00:18:55.676 "driver_specific": { 00:18:55.676 "passthru": { 00:18:55.676 "name": "pt2", 00:18:55.676 "base_bdev_name": "malloc2" 00:18:55.676 } 00:18:55.676 } 00:18:55.676 }' 00:18:55.677 13:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:55.677 13:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:55.936 13:18:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:55.936 13:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:55.936 13:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:55.936 13:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:55.936 13:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:55.936 13:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:55.936 13:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:55.936 13:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:55.936 13:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:55.936 13:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:55.936 13:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:55.936 13:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:55.936 13:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:56.196 13:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:56.196 "name": "pt3", 00:18:56.196 "aliases": [ 00:18:56.196 "00000000-0000-0000-0000-000000000003" 00:18:56.196 ], 00:18:56.196 "product_name": "passthru", 00:18:56.196 "block_size": 512, 00:18:56.196 "num_blocks": 65536, 00:18:56.196 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:56.196 "assigned_rate_limits": { 00:18:56.196 "rw_ios_per_sec": 0, 00:18:56.196 "rw_mbytes_per_sec": 0, 00:18:56.196 "r_mbytes_per_sec": 0, 00:18:56.196 "w_mbytes_per_sec": 0 00:18:56.196 }, 00:18:56.196 "claimed": true, 00:18:56.196 
"claim_type": "exclusive_write", 00:18:56.196 "zoned": false, 00:18:56.196 "supported_io_types": { 00:18:56.196 "read": true, 00:18:56.196 "write": true, 00:18:56.196 "unmap": true, 00:18:56.196 "flush": true, 00:18:56.196 "reset": true, 00:18:56.196 "nvme_admin": false, 00:18:56.196 "nvme_io": false, 00:18:56.196 "nvme_io_md": false, 00:18:56.196 "write_zeroes": true, 00:18:56.196 "zcopy": true, 00:18:56.196 "get_zone_info": false, 00:18:56.196 "zone_management": false, 00:18:56.196 "zone_append": false, 00:18:56.196 "compare": false, 00:18:56.196 "compare_and_write": false, 00:18:56.196 "abort": true, 00:18:56.196 "seek_hole": false, 00:18:56.196 "seek_data": false, 00:18:56.196 "copy": true, 00:18:56.196 "nvme_iov_md": false 00:18:56.196 }, 00:18:56.196 "memory_domains": [ 00:18:56.196 { 00:18:56.196 "dma_device_id": "system", 00:18:56.196 "dma_device_type": 1 00:18:56.196 }, 00:18:56.196 { 00:18:56.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.196 "dma_device_type": 2 00:18:56.196 } 00:18:56.196 ], 00:18:56.196 "driver_specific": { 00:18:56.196 "passthru": { 00:18:56.196 "name": "pt3", 00:18:56.196 "base_bdev_name": "malloc3" 00:18:56.196 } 00:18:56.196 } 00:18:56.196 }' 00:18:56.196 13:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:56.455 13:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:56.455 13:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:56.455 13:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:56.455 13:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:56.455 13:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:56.455 13:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:56.455 13:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:18:56.455 13:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:56.455 13:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:56.714 13:18:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:56.714 13:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:56.714 13:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:56.714 13:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:18:56.714 13:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:56.975 13:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:56.975 "name": "pt4", 00:18:56.975 "aliases": [ 00:18:56.975 "00000000-0000-0000-0000-000000000004" 00:18:56.975 ], 00:18:56.975 "product_name": "passthru", 00:18:56.975 "block_size": 512, 00:18:56.975 "num_blocks": 65536, 00:18:56.975 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:56.975 "assigned_rate_limits": { 00:18:56.975 "rw_ios_per_sec": 0, 00:18:56.975 "rw_mbytes_per_sec": 0, 00:18:56.975 "r_mbytes_per_sec": 0, 00:18:56.975 "w_mbytes_per_sec": 0 00:18:56.975 }, 00:18:56.975 "claimed": true, 00:18:56.975 "claim_type": "exclusive_write", 00:18:56.975 "zoned": false, 00:18:56.975 "supported_io_types": { 00:18:56.975 "read": true, 00:18:56.975 "write": true, 00:18:56.975 "unmap": true, 00:18:56.975 "flush": true, 00:18:56.975 "reset": true, 00:18:56.975 "nvme_admin": false, 00:18:56.975 "nvme_io": false, 00:18:56.975 "nvme_io_md": false, 00:18:56.975 "write_zeroes": true, 00:18:56.975 "zcopy": true, 00:18:56.975 "get_zone_info": false, 00:18:56.975 "zone_management": false, 00:18:56.975 "zone_append": false, 00:18:56.975 "compare": false, 00:18:56.975 
"compare_and_write": false, 00:18:56.975 "abort": true, 00:18:56.975 "seek_hole": false, 00:18:56.975 "seek_data": false, 00:18:56.975 "copy": true, 00:18:56.975 "nvme_iov_md": false 00:18:56.975 }, 00:18:56.975 "memory_domains": [ 00:18:56.975 { 00:18:56.975 "dma_device_id": "system", 00:18:56.975 "dma_device_type": 1 00:18:56.975 }, 00:18:56.975 { 00:18:56.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.975 "dma_device_type": 2 00:18:56.975 } 00:18:56.975 ], 00:18:56.975 "driver_specific": { 00:18:56.975 "passthru": { 00:18:56.975 "name": "pt4", 00:18:56.975 "base_bdev_name": "malloc4" 00:18:56.975 } 00:18:56.975 } 00:18:56.975 }' 00:18:56.975 13:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:56.975 13:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:56.975 13:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:56.975 13:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:56.975 13:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:56.975 13:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:56.975 13:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:56.975 13:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:56.975 13:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:56.975 13:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:57.317 13:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:57.317 13:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:57.317 13:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:57.317 13:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:18:57.317 [2024-07-26 13:18:37.787151] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:57.317 13:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=d3cc324a-7ed6-4b0d-815c-bff908987970 00:18:57.317 13:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z d3cc324a-7ed6-4b0d-815c-bff908987970 ']' 00:18:57.317 13:18:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:57.577 [2024-07-26 13:18:38.011439] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:57.577 [2024-07-26 13:18:38.011459] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:57.577 [2024-07-26 13:18:38.011504] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:57.577 [2024-07-26 13:18:38.011563] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:57.577 [2024-07-26 13:18:38.011575] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc13560 name raid_bdev1, state offline 00:18:57.577 13:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:18:57.577 13:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.836 13:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:18:57.836 13:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:18:57.836 13:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in 
"${base_bdevs_pt[@]}" 00:18:57.836 13:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:58.096 13:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:18:58.096 13:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:58.355 13:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:18:58.355 13:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:58.614 13:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:18:58.614 13:18:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:58.874 13:18:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:58.874 13:18:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:59.134 13:18:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:18:59.134 13:18:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:59.134 13:18:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:18:59.134 13:18:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:59.134 13:18:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:59.134 13:18:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:59.134 13:18:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:59.134 13:18:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:59.134 13:18:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:59.134 13:18:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:59.134 13:18:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:59.134 13:18:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:18:59.134 13:18:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:59.134 [2024-07-26 13:18:39.619587] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:59.134 [2024-07-26 13:18:39.620835] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:59.134 [2024-07-26 13:18:39.620875] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:18:59.134 [2024-07-26 13:18:39.620906] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:18:59.134 [2024-07-26 13:18:39.620948] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:18:59.134 [2024-07-26 13:18:39.620985] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:18:59.134 [2024-07-26 13:18:39.621006] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:18:59.134 [2024-07-26 13:18:39.621026] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:18:59.134 [2024-07-26 13:18:39.621043] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:18:59.134 [2024-07-26 13:18:39.621053] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc13530 name raid_bdev1, state configuring
00:18:59.134 request:
00:18:59.134 {
00:18:59.134 "name": "raid_bdev1",
00:18:59.134 "raid_level": "raid0",
00:18:59.134 "base_bdevs": [
00:18:59.134 "malloc1",
00:18:59.134 "malloc2",
00:18:59.134 "malloc3",
00:18:59.134 "malloc4"
00:18:59.134 ],
00:18:59.134 "strip_size_kb": 64,
00:18:59.134 "superblock": false,
00:18:59.134 "method": "bdev_raid_create",
00:18:59.134 "req_id": 1
00:18:59.134 }
00:18:59.134 Got JSON-RPC error response
00:18:59.134 response:
00:18:59.134 {
00:18:59.134 "code": -17,
00:18:59.134 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:18:59.134 }
00:18:59.134 13:18:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:18:59.134 13:18:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:59.134 13:18:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:59.134 13:18:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:59.134 13:18:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:59.134 13:18:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]'
00:18:59.394 13:18:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev=
00:18:59.394 13:18:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']'
00:18:59.394 13:18:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:18:59.654 [2024-07-26 13:18:40.076739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:18:59.654 [2024-07-26 13:18:40.076778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:18:59.654 [2024-07-26 13:18:40.076796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdbdd50
00:18:59.654 [2024-07-26 13:18:40.076807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:18:59.654 [2024-07-26 13:18:40.078298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:18:59.654 [2024-07-26 13:18:40.078323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:18:59.654 [2024-07-26 13:18:40.078384] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:18:59.654 [2024-07-26 13:18:40.078410] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:18:59.654 pt1
00:18:59.654 13:18:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4
00:18:59.654 13:18:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:18:59.654 13:18:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:18:59.654 13:18:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:18:59.654 13:18:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:18:59.654 13:18:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:18:59.654 13:18:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:18:59.654 13:18:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:18:59.654 13:18:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:18:59.654 13:18:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:18:59.654 13:18:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:18:59.655 13:18:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:18:59.914 13:18:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:18:59.914 "name": "raid_bdev1",
00:18:59.914 "uuid": "d3cc324a-7ed6-4b0d-815c-bff908987970",
00:18:59.914 "strip_size_kb": 64,
00:18:59.914 "state": "configuring",
00:18:59.914 "raid_level": "raid0",
00:18:59.914 "superblock": true,
00:18:59.914 "num_base_bdevs": 4,
00:18:59.914 "num_base_bdevs_discovered": 1,
00:18:59.914 "num_base_bdevs_operational": 4,
00:18:59.914 "base_bdevs_list": [
00:18:59.914 {
00:18:59.914 "name": "pt1",
00:18:59.914 "uuid": "00000000-0000-0000-0000-000000000001",
00:18:59.914 "is_configured": true,
00:18:59.914 "data_offset": 2048,
00:18:59.914 "data_size": 63488
00:18:59.914 },
00:18:59.914 {
00:18:59.914 "name": null,
00:18:59.914 "uuid": "00000000-0000-0000-0000-000000000002",
00:18:59.914 "is_configured": false,
00:18:59.914 "data_offset": 2048,
00:18:59.914 "data_size": 63488
00:18:59.914 },
00:18:59.914 {
00:18:59.914 "name": null,
00:18:59.914 "uuid": "00000000-0000-0000-0000-000000000003",
00:18:59.914 "is_configured": false,
00:18:59.914 "data_offset": 2048,
00:18:59.914 "data_size": 63488
00:18:59.914 },
00:18:59.914 {
00:18:59.914 "name": null,
00:18:59.914 "uuid": "00000000-0000-0000-0000-000000000004",
00:18:59.914 "is_configured": false,
00:18:59.914 "data_offset": 2048,
00:18:59.914 "data_size": 63488
00:18:59.914 }
00:18:59.914 ]
00:18:59.914 }'
00:18:59.914 13:18:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:18:59.914 13:18:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:00.482 13:18:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']'
00:19:00.482 13:18:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:19:01.034 [2024-07-26 13:18:41.115498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:19:01.034 [2024-07-26 13:18:41.115541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:01.034 [2024-07-26 13:18:41.115560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc120e0
00:19:01.034 [2024-07-26 13:18:41.115571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:01.034 [2024-07-26 13:18:41.115878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:01.034 [2024-07-26 13:18:41.115895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:19:01.034 [2024-07-26 13:18:41.115949] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:19:01.034 [2024-07-26 13:18:41.115969] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:19:01.034 pt2
00:19:01.034 13:18:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:19:01.034 [2024-07-26 13:18:41.336080] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:19:01.034 13:18:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4
00:19:01.034 13:18:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:19:01.034 13:18:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring
00:19:01.034 13:18:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:19:01.034 13:18:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:19:01.034 13:18:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:19:01.034 13:18:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:19:01.034 13:18:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:19:01.034 13:18:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:19:01.034 13:18:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:19:01.034 13:18:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:01.034 13:18:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:01.292 13:18:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:19:01.292 "name": "raid_bdev1",
00:19:01.292 "uuid": "d3cc324a-7ed6-4b0d-815c-bff908987970",
00:19:01.292 "strip_size_kb": 64,
00:19:01.292 "state": "configuring",
00:19:01.292 "raid_level": "raid0",
00:19:01.292 "superblock": true,
00:19:01.292 "num_base_bdevs": 4,
00:19:01.292 "num_base_bdevs_discovered": 1,
00:19:01.292 "num_base_bdevs_operational": 4,
00:19:01.292 "base_bdevs_list": [
00:19:01.292 {
00:19:01.292 "name": "pt1",
00:19:01.292 "uuid": "00000000-0000-0000-0000-000000000001",
00:19:01.292 "is_configured": true,
00:19:01.292 "data_offset": 2048,
00:19:01.292 "data_size": 63488
00:19:01.292 },
00:19:01.292 {
00:19:01.292 "name": null,
00:19:01.292 "uuid": "00000000-0000-0000-0000-000000000002",
00:19:01.292 "is_configured": false,
00:19:01.292 "data_offset": 2048,
00:19:01.292 "data_size": 63488
00:19:01.292 },
00:19:01.292 {
00:19:01.292 "name": null,
00:19:01.292 "uuid": "00000000-0000-0000-0000-000000000003",
00:19:01.292 "is_configured": false,
00:19:01.292 "data_offset": 2048,
00:19:01.292 "data_size": 63488
00:19:01.292 },
00:19:01.292 {
00:19:01.292 "name": null,
00:19:01.292 "uuid": "00000000-0000-0000-0000-000000000004",
00:19:01.292 "is_configured": false,
00:19:01.292 "data_offset": 2048,
00:19:01.292 "data_size": 63488
00:19:01.292 }
00:19:01.292 ]
00:19:01.292 }'
00:19:01.292 13:18:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:19:01.292 13:18:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:01.860 13:18:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 ))
00:19:01.860 13:18:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs ))
00:19:01.860 13:18:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:19:01.860 [2024-07-26 13:18:42.374824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:19:01.860 [2024-07-26 13:18:42.374869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:01.860 [2024-07-26 13:18:42.374886] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc14ad0
00:19:01.860 [2024-07-26 13:18:42.374897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:01.860 [2024-07-26 13:18:42.375217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:01.860 [2024-07-26 13:18:42.375235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:19:01.860 [2024-07-26 13:18:42.375290] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:19:01.860 [2024-07-26 13:18:42.375308] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:19:01.860 pt2
00:19:02.119 13:18:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ ))
00:19:02.119 13:18:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs ))
00:19:02.119 13:18:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:19:02.119 [2024-07-26 13:18:42.607445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:19:02.119 [2024-07-26 13:18:42.607484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:02.119 [2024-07-26 13:18:42.607500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc1a710
00:19:02.119 [2024-07-26 13:18:42.607511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:02.119 [2024-07-26 13:18:42.607793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:02.119 [2024-07-26 13:18:42.607810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:19:02.119 [2024-07-26 13:18:42.607860] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:19:02.119 [2024-07-26 13:18:42.607878] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:19:02.119 pt3
00:19:02.119 13:18:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ ))
00:19:02.119 13:18:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs ))
00:19:02.119 13:18:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:19:02.379 [2024-07-26 13:18:42.824013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:19:02.379 [2024-07-26 13:18:42.824043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:02.379 [2024-07-26 13:18:42.824059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc15200
00:19:02.379 [2024-07-26 13:18:42.824070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:02.379 [2024-07-26 13:18:42.824337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:02.379 [2024-07-26 13:18:42.824353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:19:02.379 [2024-07-26 13:18:42.824403] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:19:02.379 [2024-07-26 13:18:42.824420] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:19:02.379 [2024-07-26 13:18:42.824529] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xc13c20
00:19:02.379 [2024-07-26 13:18:42.824539] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:19:02.379 [2024-07-26 13:18:42.824690] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc172f0
00:19:02.379 [2024-07-26 13:18:42.824805] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc13c20
00:19:02.379 [2024-07-26 13:18:42.824814] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xc13c20
00:19:02.379 [2024-07-26 13:18:42.824899] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:02.379 pt4
00:19:02.379 13:18:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ ))
00:19:02.379 13:18:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs ))
00:19:02.379 13:18:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:19:02.379 13:18:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:19:02.379 13:18:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:19:02.379 13:18:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0
00:19:02.379 13:18:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:19:02.379 13:18:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:19:02.379 13:18:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:19:02.379 13:18:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:19:02.379 13:18:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:19:02.379 13:18:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:19:02.379 13:18:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:02.379 13:18:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:02.638 13:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:19:02.638 "name": "raid_bdev1",
00:19:02.638 "uuid": "d3cc324a-7ed6-4b0d-815c-bff908987970",
00:19:02.638 "strip_size_kb": 64,
00:19:02.638 "state": "online",
00:19:02.638 "raid_level": "raid0",
00:19:02.638 "superblock": true,
00:19:02.638 "num_base_bdevs": 4,
00:19:02.638 "num_base_bdevs_discovered": 4,
00:19:02.638 "num_base_bdevs_operational": 4,
00:19:02.638 "base_bdevs_list": [
00:19:02.638 {
00:19:02.638 "name": "pt1",
00:19:02.638 "uuid": "00000000-0000-0000-0000-000000000001",
00:19:02.638 "is_configured": true,
00:19:02.638 "data_offset": 2048,
00:19:02.638 "data_size": 63488
00:19:02.638 },
00:19:02.638 {
00:19:02.638 "name": "pt2",
00:19:02.638 "uuid": "00000000-0000-0000-0000-000000000002",
00:19:02.638 "is_configured": true,
00:19:02.638 "data_offset": 2048,
00:19:02.638 "data_size": 63488
00:19:02.638 },
00:19:02.638 {
00:19:02.638 "name": "pt3",
00:19:02.638 "uuid": "00000000-0000-0000-0000-000000000003",
00:19:02.638 "is_configured": true,
00:19:02.638 "data_offset": 2048,
00:19:02.638 "data_size": 63488
00:19:02.638 },
00:19:02.638 {
00:19:02.638 "name": "pt4",
00:19:02.638 "uuid": "00000000-0000-0000-0000-000000000004",
00:19:02.638 "is_configured": true,
00:19:02.638 "data_offset": 2048,
00:19:02.638 "data_size": 63488
00:19:02.638 }
00:19:02.638 ]
00:19:02.638 }'
00:19:02.638 13:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:19:02.638 13:18:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:19:03.225 13:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1
00:19:03.225 13:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1
00:19:03.225 13:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info
00:19:03.225 13:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info
00:19:03.225 13:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names
00:19:03.225 13:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name
00:19:03.225 13:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:19:03.225 13:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]'
00:19:03.485 [2024-07-26 13:18:43.830945] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:19:03.485 13:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{
00:19:03.485 "name": "raid_bdev1",
00:19:03.485 "aliases": [
00:19:03.485 "d3cc324a-7ed6-4b0d-815c-bff908987970"
00:19:03.485 ],
00:19:03.485 "product_name": "Raid Volume",
00:19:03.485 "block_size": 512,
00:19:03.485 "num_blocks": 253952,
00:19:03.485 "uuid": "d3cc324a-7ed6-4b0d-815c-bff908987970",
00:19:03.485 "assigned_rate_limits": {
00:19:03.485 "rw_ios_per_sec": 0,
00:19:03.485 "rw_mbytes_per_sec": 0,
00:19:03.485 "r_mbytes_per_sec": 0,
00:19:03.485 "w_mbytes_per_sec": 0
00:19:03.485 },
00:19:03.485 "claimed": false,
00:19:03.485 "zoned": false,
00:19:03.485 "supported_io_types": {
00:19:03.485 "read": true,
00:19:03.485 "write": true,
00:19:03.485 "unmap": true,
00:19:03.485 "flush": true,
00:19:03.485 "reset": true,
00:19:03.485 "nvme_admin": false,
00:19:03.485 "nvme_io": false,
00:19:03.485 "nvme_io_md": false,
00:19:03.485 "write_zeroes": true,
00:19:03.485 "zcopy": false,
00:19:03.485 "get_zone_info": false,
00:19:03.485 "zone_management": false,
00:19:03.485 "zone_append": false,
00:19:03.485 "compare": false,
00:19:03.485 "compare_and_write": false,
00:19:03.485 "abort": false,
00:19:03.485 "seek_hole": false,
00:19:03.485 "seek_data": false,
00:19:03.485 "copy": false,
00:19:03.485 "nvme_iov_md": false
00:19:03.485 },
00:19:03.485 "memory_domains": [
00:19:03.485 {
00:19:03.485 "dma_device_id": "system",
00:19:03.485 "dma_device_type": 1
00:19:03.485 },
00:19:03.485 {
00:19:03.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:03.485 "dma_device_type": 2
00:19:03.485 },
00:19:03.485 {
00:19:03.485 "dma_device_id": "system",
00:19:03.485 "dma_device_type": 1
00:19:03.485 },
00:19:03.485 {
00:19:03.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:03.485 "dma_device_type": 2
00:19:03.485 },
00:19:03.485 {
00:19:03.485 "dma_device_id": "system",
00:19:03.485 "dma_device_type": 1
00:19:03.485 },
00:19:03.485 {
00:19:03.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:03.485 "dma_device_type": 2
00:19:03.485 },
00:19:03.485 {
00:19:03.485 "dma_device_id": "system",
00:19:03.486 "dma_device_type": 1
00:19:03.486 },
00:19:03.486 {
00:19:03.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:03.486 "dma_device_type": 2
00:19:03.486 }
00:19:03.486 ],
00:19:03.486 "driver_specific": {
00:19:03.486 "raid": {
00:19:03.486 "uuid": "d3cc324a-7ed6-4b0d-815c-bff908987970",
00:19:03.486 "strip_size_kb": 64,
00:19:03.486 "state": "online",
00:19:03.486 "raid_level": "raid0",
00:19:03.486 "superblock": true,
00:19:03.486 "num_base_bdevs": 4,
00:19:03.486 "num_base_bdevs_discovered": 4,
00:19:03.486 "num_base_bdevs_operational": 4,
00:19:03.486 "base_bdevs_list": [
00:19:03.486 {
00:19:03.486 "name": "pt1",
00:19:03.486 "uuid": "00000000-0000-0000-0000-000000000001",
00:19:03.486 "is_configured": true,
00:19:03.486 "data_offset": 2048,
00:19:03.486 "data_size": 63488
00:19:03.486 },
00:19:03.486 {
00:19:03.486 "name": "pt2",
00:19:03.486 "uuid": "00000000-0000-0000-0000-000000000002",
00:19:03.486 "is_configured": true,
00:19:03.486 "data_offset": 2048,
00:19:03.486 "data_size": 63488
00:19:03.486 },
00:19:03.486 {
00:19:03.486 "name": "pt3",
00:19:03.486 "uuid": "00000000-0000-0000-0000-000000000003",
00:19:03.486 "is_configured": true,
00:19:03.486 "data_offset": 2048,
00:19:03.486 "data_size": 63488
00:19:03.486 },
00:19:03.486 {
00:19:03.486 "name": "pt4",
00:19:03.486 "uuid": "00000000-0000-0000-0000-000000000004",
00:19:03.486 "is_configured": true,
00:19:03.486 "data_offset": 2048,
00:19:03.486 "data_size": 63488
00:19:03.486 }
00:19:03.486 ]
00:19:03.486 }
00:19:03.486 }
00:19:03.486 }'
00:19:03.486 13:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:19:03.486 13:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1
00:19:03.486 pt2
00:19:03.486 pt3
00:19:03.486 pt4'
00:19:03.486 13:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:19:03.486 13:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1
00:19:03.486 13:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:19:03.745 13:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:19:03.745 "name": "pt1",
00:19:03.745 "aliases": [
00:19:03.745 "00000000-0000-0000-0000-000000000001"
00:19:03.745 ],
00:19:03.745 "product_name": "passthru",
00:19:03.745 "block_size": 512,
00:19:03.745 "num_blocks": 65536,
00:19:03.745 "uuid": "00000000-0000-0000-0000-000000000001",
00:19:03.745 "assigned_rate_limits": {
00:19:03.745 "rw_ios_per_sec": 0,
00:19:03.745 "rw_mbytes_per_sec": 0,
00:19:03.745 "r_mbytes_per_sec": 0,
00:19:03.745 "w_mbytes_per_sec": 0
00:19:03.745 },
00:19:03.745 "claimed": true,
00:19:03.745 "claim_type": "exclusive_write",
00:19:03.745 "zoned": false,
00:19:03.745 "supported_io_types": {
00:19:03.745 "read": true,
00:19:03.745 "write": true,
00:19:03.745 "unmap": true,
00:19:03.745 "flush": true,
00:19:03.745 "reset": true,
00:19:03.745 "nvme_admin": false,
00:19:03.745 "nvme_io": false,
00:19:03.745 "nvme_io_md": false,
00:19:03.745 "write_zeroes": true,
00:19:03.745 "zcopy": true,
00:19:03.745 "get_zone_info": false,
00:19:03.745 "zone_management": false,
00:19:03.745 "zone_append": false,
00:19:03.745 "compare": false,
00:19:03.745 "compare_and_write": false,
00:19:03.745 "abort": true,
00:19:03.745 "seek_hole": false,
00:19:03.745 "seek_data": false,
00:19:03.745 "copy": true,
00:19:03.745 "nvme_iov_md": false
00:19:03.745 },
00:19:03.745 "memory_domains": [
00:19:03.745 {
00:19:03.745 "dma_device_id": "system",
00:19:03.745 "dma_device_type": 1
00:19:03.745 },
00:19:03.745 {
00:19:03.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:03.745 "dma_device_type": 2
00:19:03.745 }
00:19:03.745 ],
00:19:03.745 "driver_specific": {
00:19:03.745 "passthru": {
00:19:03.745 "name": "pt1",
00:19:03.745 "base_bdev_name": "malloc1"
00:19:03.745 }
00:19:03.745 }
00:19:03.745 }'
00:19:03.745 13:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:19:03.745 13:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:19:03.745 13:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:19:03.745 13:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:19:03.745 13:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:19:04.005 13:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:19:04.005 13:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:19:04.005 13:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:19:04.005 13:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:19:04.005 13:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:19:04.005 13:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:19:04.005 13:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:19:04.005 13:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:19:04.005 13:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2
00:19:04.005 13:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:19:04.264 13:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:19:04.264 "name": "pt2",
00:19:04.264 "aliases": [
00:19:04.264 "00000000-0000-0000-0000-000000000002"
00:19:04.264 ],
00:19:04.264 "product_name": "passthru",
00:19:04.264 "block_size": 512,
00:19:04.264 "num_blocks": 65536,
00:19:04.264 "uuid": "00000000-0000-0000-0000-000000000002",
00:19:04.264 "assigned_rate_limits": {
00:19:04.264 "rw_ios_per_sec": 0,
00:19:04.264 "rw_mbytes_per_sec": 0,
00:19:04.264 "r_mbytes_per_sec": 0,
00:19:04.264 "w_mbytes_per_sec": 0
00:19:04.264 },
00:19:04.264 "claimed": true,
00:19:04.264 "claim_type": "exclusive_write",
00:19:04.264 "zoned": false,
00:19:04.264 "supported_io_types": {
00:19:04.264 "read": true,
00:19:04.264 "write": true,
00:19:04.264 "unmap": true,
00:19:04.264 "flush": true,
00:19:04.264 "reset": true,
00:19:04.264 "nvme_admin": false,
00:19:04.264 "nvme_io": false,
00:19:04.264 "nvme_io_md": false,
00:19:04.264 "write_zeroes": true,
00:19:04.264 "zcopy": true,
00:19:04.264 "get_zone_info": false,
00:19:04.264 "zone_management": false,
00:19:04.264 "zone_append": false,
00:19:04.264 "compare": false,
00:19:04.264 "compare_and_write": false,
00:19:04.264 "abort": true,
00:19:04.264 "seek_hole": false,
00:19:04.264 "seek_data": false,
00:19:04.264 "copy": true,
00:19:04.264 "nvme_iov_md": false
00:19:04.264 },
00:19:04.264 "memory_domains": [
00:19:04.264 {
00:19:04.264 "dma_device_id": "system",
00:19:04.264 "dma_device_type": 1
00:19:04.264 },
00:19:04.264 {
00:19:04.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:04.264 "dma_device_type": 2
00:19:04.264 }
00:19:04.264 ],
00:19:04.264 "driver_specific": {
00:19:04.264 "passthru": {
00:19:04.264 "name": "pt2",
00:19:04.264 "base_bdev_name": "malloc2"
00:19:04.264 }
00:19:04.264 }
00:19:04.264 }'
00:19:04.264 13:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:19:04.264 13:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:19:04.264 13:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:19:04.264 13:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:19:04.523 13:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:19:04.523 13:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:19:04.523 13:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:19:04.523 13:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:19:04.523 13:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:19:04.523 13:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:19:04.523 13:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:19:04.523 13:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:19:04.523 13:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:19:04.523 13:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:19:04.523 13:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3
00:19:05.092 13:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:19:05.092 "name": "pt3",
00:19:05.092 "aliases": [
00:19:05.092 "00000000-0000-0000-0000-000000000003"
00:19:05.092 ],
00:19:05.092 "product_name": "passthru",
00:19:05.092 "block_size": 512,
00:19:05.092 "num_blocks": 65536,
00:19:05.092 "uuid": "00000000-0000-0000-0000-000000000003",
00:19:05.092 "assigned_rate_limits": {
00:19:05.092 "rw_ios_per_sec": 0,
00:19:05.092 "rw_mbytes_per_sec": 0,
00:19:05.092 "r_mbytes_per_sec": 0,
00:19:05.092 "w_mbytes_per_sec": 0
00:19:05.092 },
00:19:05.092 "claimed": true,
00:19:05.092 "claim_type": "exclusive_write",
00:19:05.092 "zoned": false,
00:19:05.092 "supported_io_types": {
00:19:05.092 "read": true,
00:19:05.092 "write": true,
00:19:05.092 "unmap": true,
00:19:05.092 "flush": true,
00:19:05.092 "reset": true,
00:19:05.092 "nvme_admin": false,
00:19:05.092 "nvme_io": false,
00:19:05.092 "nvme_io_md": false,
00:19:05.092 "write_zeroes": true,
00:19:05.092 "zcopy": true,
00:19:05.092 "get_zone_info": false,
00:19:05.092 "zone_management": false,
00:19:05.092 "zone_append": false,
00:19:05.092 "compare": false,
00:19:05.092 "compare_and_write": false,
00:19:05.092 "abort": true,
00:19:05.092 "seek_hole": false,
00:19:05.092 "seek_data": false,
00:19:05.092 "copy": true,
00:19:05.092 "nvme_iov_md": false
00:19:05.092 },
00:19:05.092 "memory_domains": [
00:19:05.092 {
00:19:05.092 "dma_device_id": "system",
00:19:05.092 "dma_device_type": 1
00:19:05.092 },
00:19:05.092 {
00:19:05.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:05.092 "dma_device_type": 2
00:19:05.092 }
00:19:05.092 ],
00:19:05.092 "driver_specific": {
00:19:05.092 "passthru": {
00:19:05.092 "name": "pt3",
00:19:05.092 "base_bdev_name": "malloc3"
00:19:05.092 }
00:19:05.092 }
00:19:05.092 }'
00:19:05.092 13:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:19:05.351 13:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size
00:19:05.351 13:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]]
00:19:05.351 13:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:19:05.351 13:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size
00:19:05.351 13:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]]
00:19:05.351 13:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:19:05.351 13:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave
00:19:05.351 13:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]]
00:19:05.351 13:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:19:05.351 13:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type
00:19:05.351 13:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]]
00:19:05.351 13:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names
00:19:05.351 13:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4
00:19:05.611 13:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]'
00:19:05.611 13:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{
00:19:05.611 "name": "pt4",
00:19:05.611 "aliases": [
00:19:05.611 "00000000-0000-0000-0000-000000000004"
00:19:05.611 ],
00:19:05.611 "product_name": "passthru",
00:19:05.611 "block_size": 512, 00:19:05.611 "num_blocks": 65536, 00:19:05.611 "uuid": "00000000-0000-0000-0000-000000000004", 00:19:05.611 "assigned_rate_limits": { 00:19:05.611 "rw_ios_per_sec": 0, 00:19:05.611 "rw_mbytes_per_sec": 0, 00:19:05.611 "r_mbytes_per_sec": 0, 00:19:05.611 "w_mbytes_per_sec": 0 00:19:05.611 }, 00:19:05.611 "claimed": true, 00:19:05.611 "claim_type": "exclusive_write", 00:19:05.611 "zoned": false, 00:19:05.611 "supported_io_types": { 00:19:05.611 "read": true, 00:19:05.611 "write": true, 00:19:05.611 "unmap": true, 00:19:05.611 "flush": true, 00:19:05.611 "reset": true, 00:19:05.611 "nvme_admin": false, 00:19:05.611 "nvme_io": false, 00:19:05.611 "nvme_io_md": false, 00:19:05.611 "write_zeroes": true, 00:19:05.611 "zcopy": true, 00:19:05.611 "get_zone_info": false, 00:19:05.611 "zone_management": false, 00:19:05.611 "zone_append": false, 00:19:05.611 "compare": false, 00:19:05.611 "compare_and_write": false, 00:19:05.611 "abort": true, 00:19:05.611 "seek_hole": false, 00:19:05.611 "seek_data": false, 00:19:05.611 "copy": true, 00:19:05.611 "nvme_iov_md": false 00:19:05.611 }, 00:19:05.611 "memory_domains": [ 00:19:05.611 { 00:19:05.611 "dma_device_id": "system", 00:19:05.611 "dma_device_type": 1 00:19:05.611 }, 00:19:05.611 { 00:19:05.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.611 "dma_device_type": 2 00:19:05.611 } 00:19:05.611 ], 00:19:05.611 "driver_specific": { 00:19:05.611 "passthru": { 00:19:05.611 "name": "pt4", 00:19:05.611 "base_bdev_name": "malloc4" 00:19:05.611 } 00:19:05.611 } 00:19:05.611 }' 00:19:05.611 13:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:05.870 13:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:05.870 13:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:05.870 13:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:05.870 13:18:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:05.870 13:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:05.870 13:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:05.870 13:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:05.870 13:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:05.870 13:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:06.129 13:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:06.129 13:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:06.129 13:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:06.129 13:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:19:06.389 [2024-07-26 13:18:46.674460] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.389 13:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' d3cc324a-7ed6-4b0d-815c-bff908987970 '!=' d3cc324a-7ed6-4b0d-815c-bff908987970 ']' 00:19:06.389 13:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:19:06.389 13:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:06.389 13:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:06.389 13:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 739443 00:19:06.389 13:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 739443 ']' 00:19:06.389 13:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 739443 00:19:06.389 13:18:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:19:06.389 13:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:06.389 13:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 739443 00:19:06.389 13:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:06.389 13:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:06.389 13:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 739443' 00:19:06.389 killing process with pid 739443 00:19:06.389 13:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 739443 00:19:06.389 [2024-07-26 13:18:46.745437] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:06.389 13:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 739443 00:19:06.389 [2024-07-26 13:18:46.745493] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:06.389 [2024-07-26 13:18:46.745557] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:06.389 [2024-07-26 13:18:46.745568] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc13c20 name raid_bdev1, state offline 00:19:06.389 [2024-07-26 13:18:46.777320] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:06.649 13:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:19:06.649 00:19:06.649 real 0m15.666s 00:19:06.649 user 0m28.301s 00:19:06.649 sys 0m2.754s 00:19:06.649 13:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:06.649 13:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.649 ************************************ 00:19:06.649 
END TEST raid_superblock_test 00:19:06.649 ************************************ 00:19:06.649 13:18:47 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:19:06.649 13:18:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:06.649 13:18:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:06.649 13:18:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:06.649 ************************************ 00:19:06.649 START TEST raid_read_error_test 00:19:06.649 ************************************ 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.FHnjmIRHtd 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=742413 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 742413 /var/tmp/spdk-raid.sock 00:19:06.649 13:18:47 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 742413 ']' 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:06.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.649 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:06.649 [2024-07-26 13:18:47.117103] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:19:06.649 [2024-07-26 13:18:47.117164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid742413 ] 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3d:02.1 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3d:02.3 cannot be used 
00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3f:01.3 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3f:01.4 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:06.909 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:06.909 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.909 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:06.909 [2024-07-26 13:18:47.248150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.909 [2024-07-26 13:18:47.333840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.909 [2024-07-26 13:18:47.389400] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:06.909 [2024-07-26 13:18:47.389425] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:07.478 13:18:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:07.478 13:18:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:19:07.478 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:07.478 13:18:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:07.738 BaseBdev1_malloc 00:19:07.738 13:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:07.997 true 00:19:07.997 13:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:07.997 [2024-07-26 13:18:48.457054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:07.997 [2024-07-26 13:18:48.457091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.997 [2024-07-26 13:18:48.457108] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a61190 00:19:07.997 [2024-07-26 13:18:48.457119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.997 [2024-07-26 13:18:48.458643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.998 [2024-07-26 13:18:48.458671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:07.998 BaseBdev1 00:19:07.998 13:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:07.998 13:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:08.257 BaseBdev2_malloc 00:19:08.257 13:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:08.516 true 00:19:08.516 13:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:09.084 [2024-07-26 13:18:49.327679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:19:09.084 [2024-07-26 13:18:49.327718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.084 [2024-07-26 13:18:49.327741] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a65e20 00:19:09.084 [2024-07-26 13:18:49.327752] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.084 [2024-07-26 13:18:49.329147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.084 [2024-07-26 13:18:49.329174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:09.084 BaseBdev2 00:19:09.084 13:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:09.084 13:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:09.084 BaseBdev3_malloc 00:19:09.084 13:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:09.343 true 00:19:09.344 13:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:09.603 [2024-07-26 13:18:49.873316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:09.603 [2024-07-26 13:18:49.873354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.603 [2024-07-26 13:18:49.873374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a66d90 00:19:09.603 [2024-07-26 13:18:49.873385] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.603 [2024-07-26 
13:18:49.874759] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.603 [2024-07-26 13:18:49.874786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:09.603 BaseBdev3 00:19:09.603 13:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:09.603 13:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:09.863 BaseBdev4_malloc 00:19:10.122 13:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:19:10.122 true 00:19:10.122 13:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:19:10.382 [2024-07-26 13:18:50.751963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:19:10.382 [2024-07-26 13:18:50.752001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:10.382 [2024-07-26 13:18:50.752019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a69000 00:19:10.382 [2024-07-26 13:18:50.752030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:10.382 [2024-07-26 13:18:50.753421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:10.382 [2024-07-26 13:18:50.753448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:10.382 BaseBdev4 00:19:10.382 13:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:19:10.642 [2024-07-26 13:18:50.976585] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:10.642 [2024-07-26 13:18:50.977672] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:10.642 [2024-07-26 13:18:50.977735] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:10.642 [2024-07-26 13:18:50.977788] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:10.642 [2024-07-26 13:18:50.977986] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a69dd0 00:19:10.642 [2024-07-26 13:18:50.978000] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:10.642 [2024-07-26 13:18:50.978182] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a54ec0 00:19:10.642 [2024-07-26 13:18:50.978312] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a69dd0 00:19:10.642 [2024-07-26 13:18:50.978321] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1a69dd0 00:19:10.642 [2024-07-26 13:18:50.978424] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:10.642 13:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:19:10.642 13:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:10.642 13:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:10.642 13:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:10.642 13:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:10.642 13:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 
-- # local num_base_bdevs_operational=4 00:19:10.642 13:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:10.642 13:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:10.642 13:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:10.642 13:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:10.642 13:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.642 13:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.901 13:18:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:10.901 "name": "raid_bdev1", 00:19:10.901 "uuid": "6aefe36e-3176-4997-84ce-6111b314c4e7", 00:19:10.901 "strip_size_kb": 64, 00:19:10.901 "state": "online", 00:19:10.901 "raid_level": "raid0", 00:19:10.901 "superblock": true, 00:19:10.901 "num_base_bdevs": 4, 00:19:10.901 "num_base_bdevs_discovered": 4, 00:19:10.901 "num_base_bdevs_operational": 4, 00:19:10.901 "base_bdevs_list": [ 00:19:10.901 { 00:19:10.901 "name": "BaseBdev1", 00:19:10.901 "uuid": "1e04919f-4ba4-57d0-b05c-0148e7731cf2", 00:19:10.901 "is_configured": true, 00:19:10.902 "data_offset": 2048, 00:19:10.902 "data_size": 63488 00:19:10.902 }, 00:19:10.902 { 00:19:10.902 "name": "BaseBdev2", 00:19:10.902 "uuid": "0e59f056-f6a5-5873-b2d2-20d3b85f7991", 00:19:10.902 "is_configured": true, 00:19:10.902 "data_offset": 2048, 00:19:10.902 "data_size": 63488 00:19:10.902 }, 00:19:10.902 { 00:19:10.902 "name": "BaseBdev3", 00:19:10.902 "uuid": "8d4b1c86-a8f7-53bc-af47-a98aa57a34be", 00:19:10.902 "is_configured": true, 00:19:10.902 "data_offset": 2048, 00:19:10.902 "data_size": 63488 00:19:10.902 }, 00:19:10.902 { 00:19:10.902 "name": 
"BaseBdev4", 00:19:10.902 "uuid": "fb4fad16-f2e6-52c1-bca9-99f6bc70e771", 00:19:10.902 "is_configured": true, 00:19:10.902 "data_offset": 2048, 00:19:10.902 "data_size": 63488 00:19:10.902 } 00:19:10.902 ] 00:19:10.902 }' 00:19:10.902 13:18:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:10.902 13:18:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.519 13:18:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:19:11.519 13:18:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:11.519 [2024-07-26 13:18:51.843099] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a6ac90 00:19:12.460 13:18:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:12.719 13:18:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:19:12.719 13:18:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:19:12.719 13:18:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:19:12.719 13:18:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:19:12.719 13:18:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:12.719 13:18:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:12.719 13:18:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:12.719 13:18:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:12.719 13:18:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:12.719 13:18:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:12.719 13:18:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:12.719 13:18:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:12.719 13:18:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:12.719 13:18:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.719 13:18:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.719 13:18:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:12.719 "name": "raid_bdev1", 00:19:12.719 "uuid": "6aefe36e-3176-4997-84ce-6111b314c4e7", 00:19:12.719 "strip_size_kb": 64, 00:19:12.719 "state": "online", 00:19:12.719 "raid_level": "raid0", 00:19:12.719 "superblock": true, 00:19:12.719 "num_base_bdevs": 4, 00:19:12.719 "num_base_bdevs_discovered": 4, 00:19:12.720 "num_base_bdevs_operational": 4, 00:19:12.720 "base_bdevs_list": [ 00:19:12.720 { 00:19:12.720 "name": "BaseBdev1", 00:19:12.720 "uuid": "1e04919f-4ba4-57d0-b05c-0148e7731cf2", 00:19:12.720 "is_configured": true, 00:19:12.720 "data_offset": 2048, 00:19:12.720 "data_size": 63488 00:19:12.720 }, 00:19:12.720 { 00:19:12.720 "name": "BaseBdev2", 00:19:12.720 "uuid": "0e59f056-f6a5-5873-b2d2-20d3b85f7991", 00:19:12.720 "is_configured": true, 00:19:12.720 "data_offset": 2048, 00:19:12.720 "data_size": 63488 00:19:12.720 }, 00:19:12.720 { 00:19:12.720 "name": "BaseBdev3", 00:19:12.720 "uuid": "8d4b1c86-a8f7-53bc-af47-a98aa57a34be", 00:19:12.720 "is_configured": true, 00:19:12.720 "data_offset": 2048, 00:19:12.720 "data_size": 63488 
00:19:12.720 }, 00:19:12.720 { 00:19:12.720 "name": "BaseBdev4", 00:19:12.720 "uuid": "fb4fad16-f2e6-52c1-bca9-99f6bc70e771", 00:19:12.720 "is_configured": true, 00:19:12.720 "data_offset": 2048, 00:19:12.720 "data_size": 63488 00:19:12.720 } 00:19:12.720 ] 00:19:12.720 }' 00:19:12.720 13:18:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:12.720 13:18:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.657 13:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:13.917 [2024-07-26 13:18:54.282313] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:13.917 [2024-07-26 13:18:54.282345] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:13.917 [2024-07-26 13:18:54.285261] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:13.917 [2024-07-26 13:18:54.285297] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.917 [2024-07-26 13:18:54.285332] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:13.917 [2024-07-26 13:18:54.285342] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a69dd0 name raid_bdev1, state offline 00:19:13.917 0 00:19:13.917 13:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 742413 00:19:13.917 13:18:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 742413 ']' 00:19:13.917 13:18:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 742413 00:19:13.917 13:18:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:19:13.917 13:18:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:19:13.917 13:18:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 742413 00:19:13.917 13:18:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:13.917 13:18:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:13.917 13:18:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 742413' 00:19:13.917 killing process with pid 742413 00:19:13.917 13:18:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 742413 00:19:13.917 [2024-07-26 13:18:54.361104] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:13.917 13:18:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 742413 00:19:13.917 [2024-07-26 13:18:54.387962] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:14.178 13:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.FHnjmIRHtd 00:19:14.178 13:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:19:14.178 13:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:19:14.178 13:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.41 00:19:14.178 13:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:19:14.178 13:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:14.178 13:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:14.178 13:18:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.41 != \0\.\0\0 ]] 00:19:14.178 00:19:14.178 real 0m7.548s 00:19:14.178 user 0m12.134s 00:19:14.178 sys 0m1.266s 00:19:14.178 13:18:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:14.178 13:18:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.178 ************************************ 00:19:14.178 END TEST raid_read_error_test 00:19:14.178 ************************************ 00:19:14.178 13:18:54 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:19:14.178 13:18:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:14.178 13:18:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:14.178 13:18:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:14.178 ************************************ 00:19:14.178 START TEST raid_write_error_test 00:19:14.178 ************************************ 00:19:14.178 13:18:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:19:14.178 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:19:14.178 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:19:14.178 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:19:14.178 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:19:14.178 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:14.178 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:19:14.178 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:14.178 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:14.178 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:19:14.178 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:14.178 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs 
)) 00:19:14.178 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:19:14.178 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:14.178 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:14.178 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:19:14.178 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:14.178 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:14.178 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:14.178 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:19:14.178 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:19:14.178 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:19:14.179 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:19:14.179 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:19:14.179 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:19:14.179 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:19:14.179 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:19:14.179 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:19:14.179 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:19:14.179 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.PlE4j03Ml2 00:19:14.179 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # 
raid_pid=743833 00:19:14.179 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 743833 /var/tmp/spdk-raid.sock 00:19:14.179 13:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:14.179 13:18:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 743833 ']' 00:19:14.179 13:18:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:14.179 13:18:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:14.179 13:18:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:14.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:14.179 13:18:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:14.179 13:18:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.439 [2024-07-26 13:18:54.751351] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:19:14.439 [2024-07-26 13:18:54.751394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid743833 ] 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3d:02.1 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3d:02.3 cannot be used 
00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3f:01.3 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3f:01.4 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:14.439 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:14.439 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:14.439 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:14.439 [2024-07-26 13:18:54.867686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.439 [2024-07-26 13:18:54.949150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.699 [2024-07-26 13:18:55.003126] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:14.699 [2024-07-26 13:18:55.003162] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:15.268 13:18:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:15.268 13:18:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:19:15.268 13:18:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:15.268 13:18:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:15.527 BaseBdev1_malloc 00:19:15.527 13:18:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:15.527 true 00:19:15.785 13:18:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:15.786 [2024-07-26 13:18:56.275129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:15.786 [2024-07-26 13:18:56.275176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:15.786 [2024-07-26 13:18:56.275193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb46190 00:19:15.786 [2024-07-26 13:18:56.275205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:15.786 [2024-07-26 13:18:56.276698] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:15.786 [2024-07-26 13:18:56.276726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:15.786 BaseBdev1 00:19:15.786 13:18:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:15.786 13:18:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:16.044 BaseBdev2_malloc 00:19:16.044 13:18:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:16.303 true 00:19:16.303 13:18:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:16.561 [2024-07-26 13:18:56.949014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:19:16.561 [2024-07-26 13:18:56.949051] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:16.562 [2024-07-26 13:18:56.949067] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb4ae20 00:19:16.562 [2024-07-26 13:18:56.949078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:16.562 [2024-07-26 13:18:56.950332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:16.562 [2024-07-26 13:18:56.950358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:16.562 BaseBdev2 00:19:16.562 13:18:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:16.562 13:18:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:16.820 BaseBdev3_malloc 00:19:16.820 13:18:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:17.080 true 00:19:17.080 13:18:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:17.339 [2024-07-26 13:18:57.638889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:17.339 [2024-07-26 13:18:57.638924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.339 [2024-07-26 13:18:57.638942] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb4bd90 00:19:17.339 [2024-07-26 13:18:57.638953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.339 [2024-07-26 
13:18:57.640222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.339 [2024-07-26 13:18:57.640248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:17.339 BaseBdev3 00:19:17.339 13:18:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:17.339 13:18:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:17.599 BaseBdev4_malloc 00:19:17.599 13:18:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:19:17.599 true 00:19:17.599 13:18:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:19:17.858 [2024-07-26 13:18:58.324822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:19:17.859 [2024-07-26 13:18:58.324859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.859 [2024-07-26 13:18:58.324875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb4e000 00:19:17.859 [2024-07-26 13:18:58.324886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.859 [2024-07-26 13:18:58.326132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.859 [2024-07-26 13:18:58.326166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:17.859 BaseBdev4 00:19:17.859 13:18:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:19:18.118 [2024-07-26 13:18:58.545438] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:18.118 [2024-07-26 13:18:58.546500] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:18.118 [2024-07-26 13:18:58.546561] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:18.118 [2024-07-26 13:18:58.546613] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:18.118 [2024-07-26 13:18:58.546806] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xb4edd0 00:19:18.118 [2024-07-26 13:18:58.546817] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:18.118 [2024-07-26 13:18:58.546984] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb39ec0 00:19:18.118 [2024-07-26 13:18:58.547111] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb4edd0 00:19:18.118 [2024-07-26 13:18:58.547120] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xb4edd0 00:19:18.118 [2024-07-26 13:18:58.547229] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.118 13:18:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:19:18.118 13:18:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:18.118 13:18:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:18.118 13:18:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:18.118 13:18:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:18.118 13:18:58 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:18.118 13:18:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:18.118 13:18:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:18.118 13:18:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:18.118 13:18:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:18.118 13:18:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.118 13:18:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.377 13:18:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:18.377 "name": "raid_bdev1", 00:19:18.377 "uuid": "f184b8fc-9ef0-4a74-9181-7e0c86f3b155", 00:19:18.377 "strip_size_kb": 64, 00:19:18.377 "state": "online", 00:19:18.377 "raid_level": "raid0", 00:19:18.377 "superblock": true, 00:19:18.377 "num_base_bdevs": 4, 00:19:18.377 "num_base_bdevs_discovered": 4, 00:19:18.377 "num_base_bdevs_operational": 4, 00:19:18.377 "base_bdevs_list": [ 00:19:18.377 { 00:19:18.377 "name": "BaseBdev1", 00:19:18.377 "uuid": "cb7e8b4f-efef-50b6-b913-e3888a810282", 00:19:18.377 "is_configured": true, 00:19:18.377 "data_offset": 2048, 00:19:18.377 "data_size": 63488 00:19:18.377 }, 00:19:18.377 { 00:19:18.377 "name": "BaseBdev2", 00:19:18.377 "uuid": "cff7df40-3396-5163-927f-cabdfb76b1df", 00:19:18.377 "is_configured": true, 00:19:18.377 "data_offset": 2048, 00:19:18.377 "data_size": 63488 00:19:18.377 }, 00:19:18.377 { 00:19:18.377 "name": "BaseBdev3", 00:19:18.377 "uuid": "d787bd60-2d95-5cb6-9319-0596f63cbce2", 00:19:18.377 "is_configured": true, 00:19:18.377 "data_offset": 2048, 00:19:18.377 "data_size": 63488 00:19:18.377 }, 00:19:18.377 
{ 00:19:18.377 "name": "BaseBdev4", 00:19:18.377 "uuid": "8c3d038b-6011-54f6-837b-b9aa97082d36", 00:19:18.377 "is_configured": true, 00:19:18.377 "data_offset": 2048, 00:19:18.377 "data_size": 63488 00:19:18.377 } 00:19:18.377 ] 00:19:18.377 }' 00:19:18.377 13:18:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:18.377 13:18:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.945 13:18:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:19:18.945 13:18:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:19.205 [2024-07-26 13:18:59.476150] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb4fc90 00:19:20.143 13:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:20.143 13:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:19:20.143 13:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:19:20.143 13:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:19:20.143 13:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:19:20.143 13:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:20.143 13:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:20.143 13:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:19:20.143 13:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:19:20.143 13:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:20.143 13:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:20.143 13:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:20.143 13:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:20.143 13:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:20.143 13:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.143 13:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.402 13:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:20.402 "name": "raid_bdev1", 00:19:20.402 "uuid": "f184b8fc-9ef0-4a74-9181-7e0c86f3b155", 00:19:20.402 "strip_size_kb": 64, 00:19:20.402 "state": "online", 00:19:20.402 "raid_level": "raid0", 00:19:20.402 "superblock": true, 00:19:20.402 "num_base_bdevs": 4, 00:19:20.402 "num_base_bdevs_discovered": 4, 00:19:20.402 "num_base_bdevs_operational": 4, 00:19:20.402 "base_bdevs_list": [ 00:19:20.402 { 00:19:20.402 "name": "BaseBdev1", 00:19:20.402 "uuid": "cb7e8b4f-efef-50b6-b913-e3888a810282", 00:19:20.402 "is_configured": true, 00:19:20.402 "data_offset": 2048, 00:19:20.402 "data_size": 63488 00:19:20.402 }, 00:19:20.402 { 00:19:20.402 "name": "BaseBdev2", 00:19:20.402 "uuid": "cff7df40-3396-5163-927f-cabdfb76b1df", 00:19:20.402 "is_configured": true, 00:19:20.402 "data_offset": 2048, 00:19:20.402 "data_size": 63488 00:19:20.402 }, 00:19:20.402 { 00:19:20.402 "name": "BaseBdev3", 00:19:20.402 "uuid": "d787bd60-2d95-5cb6-9319-0596f63cbce2", 00:19:20.402 "is_configured": true, 00:19:20.402 
"data_offset": 2048, 00:19:20.402 "data_size": 63488 00:19:20.402 }, 00:19:20.402 { 00:19:20.402 "name": "BaseBdev4", 00:19:20.402 "uuid": "8c3d038b-6011-54f6-837b-b9aa97082d36", 00:19:20.402 "is_configured": true, 00:19:20.402 "data_offset": 2048, 00:19:20.402 "data_size": 63488 00:19:20.402 } 00:19:20.402 ] 00:19:20.402 }' 00:19:20.402 13:19:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:20.402 13:19:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.969 13:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:21.229 [2024-07-26 13:19:01.642114] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:21.229 [2024-07-26 13:19:01.642163] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:21.229 [2024-07-26 13:19:01.645079] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:21.229 [2024-07-26 13:19:01.645118] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.229 [2024-07-26 13:19:01.645164] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:21.229 [2024-07-26 13:19:01.645175] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb4edd0 name raid_bdev1, state offline 00:19:21.229 0 00:19:21.229 13:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 743833 00:19:21.229 13:19:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 743833 ']' 00:19:21.229 13:19:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 743833 00:19:21.229 13:19:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:19:21.229 13:19:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:21.229 13:19:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 743833 00:19:21.229 13:19:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:21.229 13:19:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:21.229 13:19:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 743833' 00:19:21.229 killing process with pid 743833 00:19:21.229 13:19:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 743833 00:19:21.229 [2024-07-26 13:19:01.719453] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:21.229 13:19:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 743833 00:19:21.229 [2024-07-26 13:19:01.746087] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:21.488 13:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.PlE4j03Ml2 00:19:21.488 13:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:19:21.488 13:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:19:21.488 13:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.46 00:19:21.488 13:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:19:21.488 13:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:21.488 13:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:21.488 13:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.46 != \0\.\0\0 ]] 00:19:21.488 00:19:21.488 real 0m7.277s 00:19:21.488 user 0m11.598s 00:19:21.488 sys 0m1.269s 00:19:21.488 13:19:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:19:21.488 13:19:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.488 ************************************ 00:19:21.488 END TEST raid_write_error_test 00:19:21.488 ************************************ 00:19:21.488 13:19:02 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:19:21.488 13:19:02 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:19:21.488 13:19:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:21.488 13:19:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:21.488 13:19:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:21.748 ************************************ 00:19:21.748 START TEST raid_state_function_test 00:19:21.748 ************************************ 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= 
num_base_bdevs )) 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 
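The `strip_size` setup above branches on the RAID level: any level other than raid1 (here, concat) gets a 64 KiB strip size passed to `bdev_raid_create` as `-z 64`, while raid1 takes no strip-size argument. A minimal standalone sketch of that selection (the `build_strip_arg` helper name is illustrative, not part of bdev_raid.sh):

```shell
# Mirror of the level check seen above: '[' concat '!=' raid1 ']' selects
# strip_size_create_arg='-z 64'; raid1 would leave it empty.
build_strip_arg() {
    level="$1"
    if [ "$level" != "raid1" ]; then
        echo "-z 64"
    fi
}

echo "concat: $(build_strip_arg concat)"
echo "raid1:  $(build_strip_arg raid1)"
```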
00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=745030 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 745030' 00:19:21.748 Process raid pid: 745030 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 745030 /var/tmp/spdk-raid.sock 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 745030 ']' 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:21.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:21.748 13:19:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.748 [2024-07-26 13:19:02.111033] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
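`waitforlisten` above blocks until the freshly started `bdev_svc` process is up and listening on `/var/tmp/spdk-raid.sock` (note `max_retries=100` in the trace). A rough standalone sketch of that polling idea, using plain path existence instead of a live UNIX socket; `wait_for_path` is a hypothetical helper, not SPDK's actual implementation:

```shell
# Hypothetical polling loop in the spirit of waitforlisten: retry until the
# RPC socket path appears or the retry budget runs out.
wait_for_path() {
    path="$1"; retries="${2:-100}"
    while [ "$retries" -gt 0 ]; do
        [ -e "$path" ] && return 0
        retries=$((retries - 1))
        sleep 0.1
    done
    return 1
}

# Simulate a daemon that brings its socket up shortly after launch.
fake_sock="/tmp/spdk-raid-demo.$$"
( sleep 0.3; : > "$fake_sock" ) &
if wait_for_path "$fake_sock" 50; then
    echo "listening on $fake_sock"
fi
wait
rm -f "$fake_sock"
```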
00:19:21.748 [2024-07-26 13:19:02.111090] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:21.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.748 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:21.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.748 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:21.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.748 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:21.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.748 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:21.748 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.749 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:21.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.749 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:21.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.749 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:21.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.749 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:21.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.749 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:21.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.749 EAL: Requested device 0000:3d:02.1 cannot be used 00:19:21.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.749 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:21.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.749 EAL: Requested device 0000:3d:02.3 cannot be used 00:19:21.749 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.749 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:21.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.749 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:21.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.749 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:21.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.749 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:21.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.749 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:21.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.749 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:21.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.749 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:21.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.749 EAL: Requested device 0000:3f:01.3 cannot be used 00:19:21.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.749 EAL: Requested device 0000:3f:01.4 cannot be used 00:19:21.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.749 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:21.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.749 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:21.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.749 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:21.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.749 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:21.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.749 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:21.749 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.749 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:21.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.749 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:21.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.749 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:21.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.749 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:21.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.749 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:21.749 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:21.749 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:21.749 [2024-07-26 13:19:02.245178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.008 [2024-07-26 13:19:02.330115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.008 [2024-07-26 13:19:02.393148] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:22.008 [2024-07-26 13:19:02.393185] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:22.576 13:19:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:22.576 13:19:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:19:22.576 13:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:22.841 [2024-07-26 13:19:03.211065] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:22.841 [2024-07-26 13:19:03.211106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:19:22.841 [2024-07-26 13:19:03.211120] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:22.841 [2024-07-26 13:19:03.211131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:22.841 [2024-07-26 13:19:03.211144] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:22.841 [2024-07-26 13:19:03.211155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:22.841 [2024-07-26 13:19:03.211163] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:22.841 [2024-07-26 13:19:03.211174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:22.841 13:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:22.841 13:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:22.841 13:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:22.841 13:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:22.841 13:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:22.841 13:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:22.841 13:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:22.841 13:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:22.841 13:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:22.841 13:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:22.841 13:19:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.841 13:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:23.105 13:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:23.105 "name": "Existed_Raid", 00:19:23.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.105 "strip_size_kb": 64, 00:19:23.105 "state": "configuring", 00:19:23.105 "raid_level": "concat", 00:19:23.105 "superblock": false, 00:19:23.105 "num_base_bdevs": 4, 00:19:23.105 "num_base_bdevs_discovered": 0, 00:19:23.105 "num_base_bdevs_operational": 4, 00:19:23.105 "base_bdevs_list": [ 00:19:23.105 { 00:19:23.105 "name": "BaseBdev1", 00:19:23.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.105 "is_configured": false, 00:19:23.105 "data_offset": 0, 00:19:23.105 "data_size": 0 00:19:23.105 }, 00:19:23.105 { 00:19:23.105 "name": "BaseBdev2", 00:19:23.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.106 "is_configured": false, 00:19:23.106 "data_offset": 0, 00:19:23.106 "data_size": 0 00:19:23.106 }, 00:19:23.106 { 00:19:23.106 "name": "BaseBdev3", 00:19:23.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.106 "is_configured": false, 00:19:23.106 "data_offset": 0, 00:19:23.106 "data_size": 0 00:19:23.106 }, 00:19:23.106 { 00:19:23.106 "name": "BaseBdev4", 00:19:23.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.106 "is_configured": false, 00:19:23.106 "data_offset": 0, 00:19:23.106 "data_size": 0 00:19:23.106 } 00:19:23.106 ] 00:19:23.106 }' 00:19:23.106 13:19:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:23.106 13:19:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.673 13:19:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:23.932 [2024-07-26 13:19:04.249673] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:23.932 [2024-07-26 13:19:04.249706] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe7af60 name Existed_Raid, state configuring 00:19:23.932 13:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:24.192 [2024-07-26 13:19:04.478289] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:24.192 [2024-07-26 13:19:04.478317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:24.192 [2024-07-26 13:19:04.478326] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:24.192 [2024-07-26 13:19:04.478337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:24.192 [2024-07-26 13:19:04.478345] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:24.192 [2024-07-26 13:19:04.478355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:24.192 [2024-07-26 13:19:04.478363] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:24.192 [2024-07-26 13:19:04.478373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:24.192 13:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:24.192 [2024-07-26 13:19:04.712456] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:24.192 BaseBdev1 00:19:24.451 13:19:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:24.451 13:19:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:24.451 13:19:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:24.451 13:19:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:24.451 13:19:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:24.451 13:19:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:24.451 13:19:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:24.451 13:19:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:24.711 [ 00:19:24.711 { 00:19:24.711 "name": "BaseBdev1", 00:19:24.711 "aliases": [ 00:19:24.711 "38e5fa2f-8371-4281-9790-90278f22871a" 00:19:24.711 ], 00:19:24.711 "product_name": "Malloc disk", 00:19:24.711 "block_size": 512, 00:19:24.711 "num_blocks": 65536, 00:19:24.711 "uuid": "38e5fa2f-8371-4281-9790-90278f22871a", 00:19:24.711 "assigned_rate_limits": { 00:19:24.711 "rw_ios_per_sec": 0, 00:19:24.711 "rw_mbytes_per_sec": 0, 00:19:24.711 "r_mbytes_per_sec": 0, 00:19:24.711 "w_mbytes_per_sec": 0 00:19:24.711 }, 00:19:24.711 "claimed": true, 00:19:24.711 "claim_type": "exclusive_write", 00:19:24.711 "zoned": false, 00:19:24.711 "supported_io_types": { 00:19:24.711 "read": true, 00:19:24.711 "write": true, 00:19:24.711 "unmap": true, 00:19:24.711 "flush": true, 00:19:24.711 
"reset": true, 00:19:24.711 "nvme_admin": false, 00:19:24.711 "nvme_io": false, 00:19:24.711 "nvme_io_md": false, 00:19:24.711 "write_zeroes": true, 00:19:24.711 "zcopy": true, 00:19:24.711 "get_zone_info": false, 00:19:24.711 "zone_management": false, 00:19:24.711 "zone_append": false, 00:19:24.711 "compare": false, 00:19:24.711 "compare_and_write": false, 00:19:24.711 "abort": true, 00:19:24.711 "seek_hole": false, 00:19:24.711 "seek_data": false, 00:19:24.711 "copy": true, 00:19:24.711 "nvme_iov_md": false 00:19:24.711 }, 00:19:24.711 "memory_domains": [ 00:19:24.711 { 00:19:24.711 "dma_device_id": "system", 00:19:24.711 "dma_device_type": 1 00:19:24.711 }, 00:19:24.711 { 00:19:24.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.711 "dma_device_type": 2 00:19:24.711 } 00:19:24.711 ], 00:19:24.711 "driver_specific": {} 00:19:24.711 } 00:19:24.711 ] 00:19:24.711 13:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:24.711 13:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:24.711 13:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:24.711 13:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:24.711 13:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:24.711 13:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:24.711 13:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:24.711 13:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:24.711 13:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:24.711 13:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 
-- # local num_base_bdevs_discovered 00:19:24.711 13:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:24.711 13:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.711 13:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:24.971 13:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:24.971 "name": "Existed_Raid", 00:19:24.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.971 "strip_size_kb": 64, 00:19:24.971 "state": "configuring", 00:19:24.971 "raid_level": "concat", 00:19:24.971 "superblock": false, 00:19:24.971 "num_base_bdevs": 4, 00:19:24.971 "num_base_bdevs_discovered": 1, 00:19:24.971 "num_base_bdevs_operational": 4, 00:19:24.971 "base_bdevs_list": [ 00:19:24.971 { 00:19:24.971 "name": "BaseBdev1", 00:19:24.971 "uuid": "38e5fa2f-8371-4281-9790-90278f22871a", 00:19:24.971 "is_configured": true, 00:19:24.971 "data_offset": 0, 00:19:24.971 "data_size": 65536 00:19:24.971 }, 00:19:24.971 { 00:19:24.971 "name": "BaseBdev2", 00:19:24.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.971 "is_configured": false, 00:19:24.971 "data_offset": 0, 00:19:24.971 "data_size": 0 00:19:24.971 }, 00:19:24.971 { 00:19:24.971 "name": "BaseBdev3", 00:19:24.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.971 "is_configured": false, 00:19:24.971 "data_offset": 0, 00:19:24.971 "data_size": 0 00:19:24.971 }, 00:19:24.971 { 00:19:24.971 "name": "BaseBdev4", 00:19:24.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.971 "is_configured": false, 00:19:24.971 "data_offset": 0, 00:19:24.971 "data_size": 0 00:19:24.971 } 00:19:24.971 ] 00:19:24.971 }' 00:19:24.971 13:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
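`verify_raid_bdev_state` above fetches `bdev_raid_get_bdevs all` and narrows it with `jq -r '.[] | select(.name == "Existed_Raid")'`, then compares fields such as `state` and `num_base_bdevs_discovered` against the expected values. Without a live RPC target (or jq), the same field checks can be sketched over the JSON shape shown in the log; `get_field` is an illustrative awk stand-in, not part of the harness:

```shell
# Trimmed copy of the bdev_raid_get_bdevs shape dumped above, after BaseBdev1
# was claimed (num_base_bdevs_discovered went from 0 to 1).
raid_bdev_info='{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "concat",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 4
}'

# Crude stand-in for the jq field access: find the line whose key matches,
# then strip quotes and the trailing comma from the value.
get_field() {
    printf '%s\n' "$raid_bdev_info" |
        awk -F': ' -v key="\"$1\"" '$1 ~ key { gsub(/[",]/, "", $2); print $2 }'
}

# The state stays "configuring" until all four base bdevs are claimed.
echo "state=$(get_field state) discovered=$(get_field num_base_bdevs_discovered)"
```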
00:19:24.971 13:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.539 13:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:25.798 [2024-07-26 13:19:06.184326] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:25.798 [2024-07-26 13:19:06.184364] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe7a7d0 name Existed_Raid, state configuring 00:19:25.798 13:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:26.096 [2024-07-26 13:19:06.412964] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:26.096 [2024-07-26 13:19:06.414350] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:26.096 [2024-07-26 13:19:06.414385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:26.096 [2024-07-26 13:19:06.414396] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:26.096 [2024-07-26 13:19:06.414407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:26.096 [2024-07-26 13:19:06.414415] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:26.096 [2024-07-26 13:19:06.414426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:26.096 13:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:26.096 13:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:26.096 13:19:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:26.096 13:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:26.096 13:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:26.096 13:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:26.096 13:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:26.096 13:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:26.096 13:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:26.096 13:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:26.096 13:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:26.096 13:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:26.096 13:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.096 13:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:26.371 13:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:26.371 "name": "Existed_Raid", 00:19:26.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.371 "strip_size_kb": 64, 00:19:26.371 "state": "configuring", 00:19:26.371 "raid_level": "concat", 00:19:26.371 "superblock": false, 00:19:26.371 "num_base_bdevs": 4, 00:19:26.371 "num_base_bdevs_discovered": 1, 00:19:26.371 "num_base_bdevs_operational": 4, 00:19:26.371 "base_bdevs_list": [ 00:19:26.371 { 
00:19:26.371 "name": "BaseBdev1", 00:19:26.371 "uuid": "38e5fa2f-8371-4281-9790-90278f22871a", 00:19:26.371 "is_configured": true, 00:19:26.371 "data_offset": 0, 00:19:26.371 "data_size": 65536 00:19:26.371 }, 00:19:26.371 { 00:19:26.371 "name": "BaseBdev2", 00:19:26.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.371 "is_configured": false, 00:19:26.371 "data_offset": 0, 00:19:26.371 "data_size": 0 00:19:26.371 }, 00:19:26.371 { 00:19:26.371 "name": "BaseBdev3", 00:19:26.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.371 "is_configured": false, 00:19:26.371 "data_offset": 0, 00:19:26.371 "data_size": 0 00:19:26.371 }, 00:19:26.371 { 00:19:26.371 "name": "BaseBdev4", 00:19:26.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.371 "is_configured": false, 00:19:26.371 "data_offset": 0, 00:19:26.371 "data_size": 0 00:19:26.371 } 00:19:26.371 ] 00:19:26.371 }' 00:19:26.371 13:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:26.371 13:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.977 13:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:26.977 [2024-07-26 13:19:07.454795] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:26.977 BaseBdev2 00:19:26.977 13:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:26.977 13:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:26.977 13:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:26.977 13:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:26.977 13:19:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:26.977 13:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:26.977 13:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:27.236 13:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:27.496 [ 00:19:27.496 { 00:19:27.496 "name": "BaseBdev2", 00:19:27.496 "aliases": [ 00:19:27.496 "99c216b3-8f08-4cb5-9b8f-39af69567556" 00:19:27.496 ], 00:19:27.496 "product_name": "Malloc disk", 00:19:27.496 "block_size": 512, 00:19:27.496 "num_blocks": 65536, 00:19:27.496 "uuid": "99c216b3-8f08-4cb5-9b8f-39af69567556", 00:19:27.496 "assigned_rate_limits": { 00:19:27.496 "rw_ios_per_sec": 0, 00:19:27.496 "rw_mbytes_per_sec": 0, 00:19:27.496 "r_mbytes_per_sec": 0, 00:19:27.496 "w_mbytes_per_sec": 0 00:19:27.496 }, 00:19:27.496 "claimed": true, 00:19:27.496 "claim_type": "exclusive_write", 00:19:27.496 "zoned": false, 00:19:27.496 "supported_io_types": { 00:19:27.496 "read": true, 00:19:27.496 "write": true, 00:19:27.496 "unmap": true, 00:19:27.496 "flush": true, 00:19:27.496 "reset": true, 00:19:27.496 "nvme_admin": false, 00:19:27.496 "nvme_io": false, 00:19:27.496 "nvme_io_md": false, 00:19:27.496 "write_zeroes": true, 00:19:27.496 "zcopy": true, 00:19:27.496 "get_zone_info": false, 00:19:27.496 "zone_management": false, 00:19:27.496 "zone_append": false, 00:19:27.496 "compare": false, 00:19:27.496 "compare_and_write": false, 00:19:27.496 "abort": true, 00:19:27.496 "seek_hole": false, 00:19:27.496 "seek_data": false, 00:19:27.496 "copy": true, 00:19:27.496 "nvme_iov_md": false 00:19:27.496 }, 00:19:27.496 "memory_domains": [ 00:19:27.496 { 00:19:27.496 "dma_device_id": "system", 
00:19:27.496 "dma_device_type": 1 00:19:27.496 }, 00:19:27.496 { 00:19:27.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.496 "dma_device_type": 2 00:19:27.496 } 00:19:27.496 ], 00:19:27.496 "driver_specific": {} 00:19:27.497 } 00:19:27.497 ] 00:19:27.497 13:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:27.497 13:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:27.497 13:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:27.497 13:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:27.497 13:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:27.497 13:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:27.497 13:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:27.497 13:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:27.497 13:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:27.497 13:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:27.497 13:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:27.497 13:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:27.497 13:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:27.497 13:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.497 13:19:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.756 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:27.756 "name": "Existed_Raid", 00:19:27.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.756 "strip_size_kb": 64, 00:19:27.756 "state": "configuring", 00:19:27.756 "raid_level": "concat", 00:19:27.756 "superblock": false, 00:19:27.756 "num_base_bdevs": 4, 00:19:27.756 "num_base_bdevs_discovered": 2, 00:19:27.756 "num_base_bdevs_operational": 4, 00:19:27.756 "base_bdevs_list": [ 00:19:27.756 { 00:19:27.756 "name": "BaseBdev1", 00:19:27.756 "uuid": "38e5fa2f-8371-4281-9790-90278f22871a", 00:19:27.756 "is_configured": true, 00:19:27.756 "data_offset": 0, 00:19:27.756 "data_size": 65536 00:19:27.756 }, 00:19:27.756 { 00:19:27.756 "name": "BaseBdev2", 00:19:27.756 "uuid": "99c216b3-8f08-4cb5-9b8f-39af69567556", 00:19:27.756 "is_configured": true, 00:19:27.756 "data_offset": 0, 00:19:27.756 "data_size": 65536 00:19:27.756 }, 00:19:27.756 { 00:19:27.756 "name": "BaseBdev3", 00:19:27.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.756 "is_configured": false, 00:19:27.756 "data_offset": 0, 00:19:27.756 "data_size": 0 00:19:27.756 }, 00:19:27.756 { 00:19:27.756 "name": "BaseBdev4", 00:19:27.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.756 "is_configured": false, 00:19:27.756 "data_offset": 0, 00:19:27.756 "data_size": 0 00:19:27.756 } 00:19:27.756 ] 00:19:27.756 }' 00:19:27.756 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:27.756 13:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.325 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:28.584 [2024-07-26 13:19:08.942168] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:28.584 BaseBdev3 00:19:28.584 13:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:28.584 13:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:28.584 13:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:28.584 13:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:28.584 13:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:28.584 13:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:28.584 13:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:28.843 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:28.843 [ 00:19:28.843 { 00:19:28.843 "name": "BaseBdev3", 00:19:28.843 "aliases": [ 00:19:28.843 "39f872f9-75cc-4188-9229-f642097b83c5" 00:19:28.843 ], 00:19:28.843 "product_name": "Malloc disk", 00:19:28.843 "block_size": 512, 00:19:28.843 "num_blocks": 65536, 00:19:28.843 "uuid": "39f872f9-75cc-4188-9229-f642097b83c5", 00:19:28.843 "assigned_rate_limits": { 00:19:28.843 "rw_ios_per_sec": 0, 00:19:28.843 "rw_mbytes_per_sec": 0, 00:19:28.843 "r_mbytes_per_sec": 0, 00:19:28.843 "w_mbytes_per_sec": 0 00:19:28.843 }, 00:19:28.843 "claimed": true, 00:19:28.843 "claim_type": "exclusive_write", 00:19:28.843 "zoned": false, 00:19:28.843 "supported_io_types": { 00:19:28.843 "read": true, 00:19:28.843 "write": true, 00:19:28.843 "unmap": true, 00:19:28.843 "flush": true, 00:19:28.843 
"reset": true, 00:19:28.843 "nvme_admin": false, 00:19:28.843 "nvme_io": false, 00:19:28.843 "nvme_io_md": false, 00:19:28.843 "write_zeroes": true, 00:19:28.843 "zcopy": true, 00:19:28.843 "get_zone_info": false, 00:19:28.843 "zone_management": false, 00:19:28.843 "zone_append": false, 00:19:28.843 "compare": false, 00:19:28.843 "compare_and_write": false, 00:19:28.843 "abort": true, 00:19:28.843 "seek_hole": false, 00:19:28.843 "seek_data": false, 00:19:28.843 "copy": true, 00:19:28.843 "nvme_iov_md": false 00:19:28.843 }, 00:19:28.843 "memory_domains": [ 00:19:28.843 { 00:19:28.843 "dma_device_id": "system", 00:19:28.843 "dma_device_type": 1 00:19:28.843 }, 00:19:28.843 { 00:19:28.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.844 "dma_device_type": 2 00:19:28.844 } 00:19:28.844 ], 00:19:28.844 "driver_specific": {} 00:19:28.844 } 00:19:28.844 ] 00:19:28.844 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:28.844 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:28.844 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:28.844 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:28.844 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:28.844 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:28.844 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:28.844 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:28.844 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:28.844 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:19:28.844 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:28.844 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:28.844 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:28.844 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.844 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.103 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:29.103 "name": "Existed_Raid", 00:19:29.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.103 "strip_size_kb": 64, 00:19:29.103 "state": "configuring", 00:19:29.103 "raid_level": "concat", 00:19:29.103 "superblock": false, 00:19:29.103 "num_base_bdevs": 4, 00:19:29.103 "num_base_bdevs_discovered": 3, 00:19:29.103 "num_base_bdevs_operational": 4, 00:19:29.103 "base_bdevs_list": [ 00:19:29.103 { 00:19:29.103 "name": "BaseBdev1", 00:19:29.103 "uuid": "38e5fa2f-8371-4281-9790-90278f22871a", 00:19:29.103 "is_configured": true, 00:19:29.103 "data_offset": 0, 00:19:29.103 "data_size": 65536 00:19:29.103 }, 00:19:29.103 { 00:19:29.103 "name": "BaseBdev2", 00:19:29.103 "uuid": "99c216b3-8f08-4cb5-9b8f-39af69567556", 00:19:29.103 "is_configured": true, 00:19:29.103 "data_offset": 0, 00:19:29.103 "data_size": 65536 00:19:29.103 }, 00:19:29.103 { 00:19:29.103 "name": "BaseBdev3", 00:19:29.103 "uuid": "39f872f9-75cc-4188-9229-f642097b83c5", 00:19:29.103 "is_configured": true, 00:19:29.103 "data_offset": 0, 00:19:29.103 "data_size": 65536 00:19:29.103 }, 00:19:29.103 { 00:19:29.103 "name": "BaseBdev4", 00:19:29.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.103 "is_configured": 
false, 00:19:29.103 "data_offset": 0, 00:19:29.103 "data_size": 0 00:19:29.103 } 00:19:29.103 ] 00:19:29.103 }' 00:19:29.103 13:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:29.103 13:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.671 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:29.930 [2024-07-26 13:19:10.304956] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:29.930 [2024-07-26 13:19:10.304991] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xe7b840 00:19:29.930 [2024-07-26 13:19:10.304999] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:19:29.930 [2024-07-26 13:19:10.305187] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe7b480 00:19:29.930 [2024-07-26 13:19:10.305306] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe7b840 00:19:29.930 [2024-07-26 13:19:10.305316] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xe7b840 00:19:29.930 [2024-07-26 13:19:10.305465] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.930 BaseBdev4 00:19:29.930 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:19:29.930 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:19:29.930 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:29.930 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:29.930 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
00:19:29.930 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:29.930 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:30.189 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:30.189 [ 00:19:30.189 { 00:19:30.189 "name": "BaseBdev4", 00:19:30.189 "aliases": [ 00:19:30.189 "c562ac0c-a96d-4e4a-9b34-41df595fcaa5" 00:19:30.189 ], 00:19:30.189 "product_name": "Malloc disk", 00:19:30.189 "block_size": 512, 00:19:30.189 "num_blocks": 65536, 00:19:30.189 "uuid": "c562ac0c-a96d-4e4a-9b34-41df595fcaa5", 00:19:30.189 "assigned_rate_limits": { 00:19:30.189 "rw_ios_per_sec": 0, 00:19:30.189 "rw_mbytes_per_sec": 0, 00:19:30.189 "r_mbytes_per_sec": 0, 00:19:30.189 "w_mbytes_per_sec": 0 00:19:30.189 }, 00:19:30.189 "claimed": true, 00:19:30.189 "claim_type": "exclusive_write", 00:19:30.189 "zoned": false, 00:19:30.189 "supported_io_types": { 00:19:30.189 "read": true, 00:19:30.189 "write": true, 00:19:30.189 "unmap": true, 00:19:30.189 "flush": true, 00:19:30.189 "reset": true, 00:19:30.189 "nvme_admin": false, 00:19:30.189 "nvme_io": false, 00:19:30.189 "nvme_io_md": false, 00:19:30.189 "write_zeroes": true, 00:19:30.189 "zcopy": true, 00:19:30.189 "get_zone_info": false, 00:19:30.189 "zone_management": false, 00:19:30.189 "zone_append": false, 00:19:30.189 "compare": false, 00:19:30.189 "compare_and_write": false, 00:19:30.189 "abort": true, 00:19:30.189 "seek_hole": false, 00:19:30.189 "seek_data": false, 00:19:30.189 "copy": true, 00:19:30.189 "nvme_iov_md": false 00:19:30.189 }, 00:19:30.189 "memory_domains": [ 00:19:30.189 { 00:19:30.189 "dma_device_id": "system", 00:19:30.189 "dma_device_type": 1 00:19:30.189 
}, 00:19:30.189 { 00:19:30.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.189 "dma_device_type": 2 00:19:30.189 } 00:19:30.189 ], 00:19:30.189 "driver_specific": {} 00:19:30.189 } 00:19:30.189 ] 00:19:30.448 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:30.448 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:30.448 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:30.448 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:30.448 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:30.448 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:30.448 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:30.448 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:30.448 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:30.448 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:30.448 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:30.449 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:30.449 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:30.449 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.449 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:19:30.449 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:30.449 "name": "Existed_Raid", 00:19:30.449 "uuid": "a70b81a2-25f6-42a8-a85e-b752e8ef311d", 00:19:30.449 "strip_size_kb": 64, 00:19:30.449 "state": "online", 00:19:30.449 "raid_level": "concat", 00:19:30.449 "superblock": false, 00:19:30.449 "num_base_bdevs": 4, 00:19:30.449 "num_base_bdevs_discovered": 4, 00:19:30.449 "num_base_bdevs_operational": 4, 00:19:30.449 "base_bdevs_list": [ 00:19:30.449 { 00:19:30.449 "name": "BaseBdev1", 00:19:30.449 "uuid": "38e5fa2f-8371-4281-9790-90278f22871a", 00:19:30.449 "is_configured": true, 00:19:30.449 "data_offset": 0, 00:19:30.449 "data_size": 65536 00:19:30.449 }, 00:19:30.449 { 00:19:30.449 "name": "BaseBdev2", 00:19:30.449 "uuid": "99c216b3-8f08-4cb5-9b8f-39af69567556", 00:19:30.449 "is_configured": true, 00:19:30.449 "data_offset": 0, 00:19:30.449 "data_size": 65536 00:19:30.449 }, 00:19:30.449 { 00:19:30.449 "name": "BaseBdev3", 00:19:30.449 "uuid": "39f872f9-75cc-4188-9229-f642097b83c5", 00:19:30.449 "is_configured": true, 00:19:30.449 "data_offset": 0, 00:19:30.449 "data_size": 65536 00:19:30.449 }, 00:19:30.449 { 00:19:30.449 "name": "BaseBdev4", 00:19:30.449 "uuid": "c562ac0c-a96d-4e4a-9b34-41df595fcaa5", 00:19:30.449 "is_configured": true, 00:19:30.449 "data_offset": 0, 00:19:30.449 "data_size": 65536 00:19:30.449 } 00:19:30.449 ] 00:19:30.449 }' 00:19:30.449 13:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:30.449 13:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.016 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:31.016 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:31.016 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 
00:19:31.016 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:31.016 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:31.016 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:31.016 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:31.016 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:31.275 [2024-07-26 13:19:11.737034] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:31.275 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:31.275 "name": "Existed_Raid", 00:19:31.275 "aliases": [ 00:19:31.275 "a70b81a2-25f6-42a8-a85e-b752e8ef311d" 00:19:31.275 ], 00:19:31.275 "product_name": "Raid Volume", 00:19:31.275 "block_size": 512, 00:19:31.275 "num_blocks": 262144, 00:19:31.275 "uuid": "a70b81a2-25f6-42a8-a85e-b752e8ef311d", 00:19:31.275 "assigned_rate_limits": { 00:19:31.275 "rw_ios_per_sec": 0, 00:19:31.275 "rw_mbytes_per_sec": 0, 00:19:31.275 "r_mbytes_per_sec": 0, 00:19:31.275 "w_mbytes_per_sec": 0 00:19:31.275 }, 00:19:31.275 "claimed": false, 00:19:31.275 "zoned": false, 00:19:31.275 "supported_io_types": { 00:19:31.275 "read": true, 00:19:31.275 "write": true, 00:19:31.275 "unmap": true, 00:19:31.275 "flush": true, 00:19:31.275 "reset": true, 00:19:31.275 "nvme_admin": false, 00:19:31.275 "nvme_io": false, 00:19:31.275 "nvme_io_md": false, 00:19:31.275 "write_zeroes": true, 00:19:31.275 "zcopy": false, 00:19:31.275 "get_zone_info": false, 00:19:31.275 "zone_management": false, 00:19:31.275 "zone_append": false, 00:19:31.275 "compare": false, 00:19:31.275 "compare_and_write": false, 00:19:31.275 "abort": false, 00:19:31.275 "seek_hole": false, 
00:19:31.275 "seek_data": false, 00:19:31.275 "copy": false, 00:19:31.275 "nvme_iov_md": false 00:19:31.275 }, 00:19:31.275 "memory_domains": [ 00:19:31.275 { 00:19:31.275 "dma_device_id": "system", 00:19:31.275 "dma_device_type": 1 00:19:31.275 }, 00:19:31.275 { 00:19:31.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.275 "dma_device_type": 2 00:19:31.275 }, 00:19:31.275 { 00:19:31.275 "dma_device_id": "system", 00:19:31.275 "dma_device_type": 1 00:19:31.275 }, 00:19:31.275 { 00:19:31.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.275 "dma_device_type": 2 00:19:31.275 }, 00:19:31.275 { 00:19:31.275 "dma_device_id": "system", 00:19:31.275 "dma_device_type": 1 00:19:31.275 }, 00:19:31.275 { 00:19:31.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.275 "dma_device_type": 2 00:19:31.275 }, 00:19:31.275 { 00:19:31.275 "dma_device_id": "system", 00:19:31.275 "dma_device_type": 1 00:19:31.275 }, 00:19:31.275 { 00:19:31.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.275 "dma_device_type": 2 00:19:31.275 } 00:19:31.275 ], 00:19:31.275 "driver_specific": { 00:19:31.275 "raid": { 00:19:31.275 "uuid": "a70b81a2-25f6-42a8-a85e-b752e8ef311d", 00:19:31.275 "strip_size_kb": 64, 00:19:31.275 "state": "online", 00:19:31.275 "raid_level": "concat", 00:19:31.275 "superblock": false, 00:19:31.275 "num_base_bdevs": 4, 00:19:31.275 "num_base_bdevs_discovered": 4, 00:19:31.275 "num_base_bdevs_operational": 4, 00:19:31.275 "base_bdevs_list": [ 00:19:31.275 { 00:19:31.275 "name": "BaseBdev1", 00:19:31.275 "uuid": "38e5fa2f-8371-4281-9790-90278f22871a", 00:19:31.275 "is_configured": true, 00:19:31.275 "data_offset": 0, 00:19:31.275 "data_size": 65536 00:19:31.275 }, 00:19:31.275 { 00:19:31.275 "name": "BaseBdev2", 00:19:31.275 "uuid": "99c216b3-8f08-4cb5-9b8f-39af69567556", 00:19:31.275 "is_configured": true, 00:19:31.275 "data_offset": 0, 00:19:31.275 "data_size": 65536 00:19:31.275 }, 00:19:31.275 { 00:19:31.275 "name": "BaseBdev3", 00:19:31.275 "uuid": 
"39f872f9-75cc-4188-9229-f642097b83c5", 00:19:31.275 "is_configured": true, 00:19:31.275 "data_offset": 0, 00:19:31.275 "data_size": 65536 00:19:31.275 }, 00:19:31.275 { 00:19:31.275 "name": "BaseBdev4", 00:19:31.275 "uuid": "c562ac0c-a96d-4e4a-9b34-41df595fcaa5", 00:19:31.275 "is_configured": true, 00:19:31.275 "data_offset": 0, 00:19:31.275 "data_size": 65536 00:19:31.275 } 00:19:31.275 ] 00:19:31.275 } 00:19:31.275 } 00:19:31.275 }' 00:19:31.275 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:31.275 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:31.275 BaseBdev2 00:19:31.275 BaseBdev3 00:19:31.275 BaseBdev4' 00:19:31.534 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:31.534 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:31.534 13:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:31.534 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:31.534 "name": "BaseBdev1", 00:19:31.534 "aliases": [ 00:19:31.534 "38e5fa2f-8371-4281-9790-90278f22871a" 00:19:31.534 ], 00:19:31.534 "product_name": "Malloc disk", 00:19:31.534 "block_size": 512, 00:19:31.534 "num_blocks": 65536, 00:19:31.534 "uuid": "38e5fa2f-8371-4281-9790-90278f22871a", 00:19:31.534 "assigned_rate_limits": { 00:19:31.534 "rw_ios_per_sec": 0, 00:19:31.534 "rw_mbytes_per_sec": 0, 00:19:31.534 "r_mbytes_per_sec": 0, 00:19:31.534 "w_mbytes_per_sec": 0 00:19:31.534 }, 00:19:31.534 "claimed": true, 00:19:31.534 "claim_type": "exclusive_write", 00:19:31.534 "zoned": false, 00:19:31.534 "supported_io_types": { 00:19:31.534 "read": true, 00:19:31.534 
"write": true, 00:19:31.534 "unmap": true, 00:19:31.534 "flush": true, 00:19:31.534 "reset": true, 00:19:31.534 "nvme_admin": false, 00:19:31.534 "nvme_io": false, 00:19:31.534 "nvme_io_md": false, 00:19:31.534 "write_zeroes": true, 00:19:31.534 "zcopy": true, 00:19:31.534 "get_zone_info": false, 00:19:31.534 "zone_management": false, 00:19:31.534 "zone_append": false, 00:19:31.534 "compare": false, 00:19:31.534 "compare_and_write": false, 00:19:31.534 "abort": true, 00:19:31.534 "seek_hole": false, 00:19:31.534 "seek_data": false, 00:19:31.534 "copy": true, 00:19:31.534 "nvme_iov_md": false 00:19:31.534 }, 00:19:31.534 "memory_domains": [ 00:19:31.534 { 00:19:31.534 "dma_device_id": "system", 00:19:31.534 "dma_device_type": 1 00:19:31.534 }, 00:19:31.534 { 00:19:31.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.534 "dma_device_type": 2 00:19:31.534 } 00:19:31.534 ], 00:19:31.534 "driver_specific": {} 00:19:31.534 }' 00:19:31.534 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:31.793 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:31.793 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:31.793 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:31.793 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:31.793 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:31.793 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:31.794 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:31.794 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:31.794 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:32.052 13:19:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:32.052 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:32.052 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:32.052 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:32.052 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:32.311 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:32.311 "name": "BaseBdev2", 00:19:32.311 "aliases": [ 00:19:32.311 "99c216b3-8f08-4cb5-9b8f-39af69567556" 00:19:32.311 ], 00:19:32.311 "product_name": "Malloc disk", 00:19:32.311 "block_size": 512, 00:19:32.311 "num_blocks": 65536, 00:19:32.311 "uuid": "99c216b3-8f08-4cb5-9b8f-39af69567556", 00:19:32.311 "assigned_rate_limits": { 00:19:32.311 "rw_ios_per_sec": 0, 00:19:32.311 "rw_mbytes_per_sec": 0, 00:19:32.311 "r_mbytes_per_sec": 0, 00:19:32.311 "w_mbytes_per_sec": 0 00:19:32.311 }, 00:19:32.311 "claimed": true, 00:19:32.311 "claim_type": "exclusive_write", 00:19:32.311 "zoned": false, 00:19:32.311 "supported_io_types": { 00:19:32.311 "read": true, 00:19:32.311 "write": true, 00:19:32.311 "unmap": true, 00:19:32.311 "flush": true, 00:19:32.311 "reset": true, 00:19:32.311 "nvme_admin": false, 00:19:32.311 "nvme_io": false, 00:19:32.311 "nvme_io_md": false, 00:19:32.311 "write_zeroes": true, 00:19:32.311 "zcopy": true, 00:19:32.311 "get_zone_info": false, 00:19:32.311 "zone_management": false, 00:19:32.311 "zone_append": false, 00:19:32.311 "compare": false, 00:19:32.311 "compare_and_write": false, 00:19:32.311 "abort": true, 00:19:32.311 "seek_hole": false, 00:19:32.311 "seek_data": false, 00:19:32.311 "copy": true, 00:19:32.311 "nvme_iov_md": false 00:19:32.311 }, 
00:19:32.311 "memory_domains": [ 00:19:32.311 { 00:19:32.311 "dma_device_id": "system", 00:19:32.311 "dma_device_type": 1 00:19:32.311 }, 00:19:32.311 { 00:19:32.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.311 "dma_device_type": 2 00:19:32.311 } 00:19:32.311 ], 00:19:32.311 "driver_specific": {} 00:19:32.311 }' 00:19:32.311 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:32.311 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:32.311 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:32.311 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:32.311 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:32.311 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:32.311 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:32.311 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:32.570 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:32.570 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:32.570 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:32.570 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:32.570 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:32.570 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:32.570 13:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:32.829 13:19:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:32.829 "name": "BaseBdev3", 00:19:32.829 "aliases": [ 00:19:32.829 "39f872f9-75cc-4188-9229-f642097b83c5" 00:19:32.829 ], 00:19:32.829 "product_name": "Malloc disk", 00:19:32.829 "block_size": 512, 00:19:32.829 "num_blocks": 65536, 00:19:32.829 "uuid": "39f872f9-75cc-4188-9229-f642097b83c5", 00:19:32.829 "assigned_rate_limits": { 00:19:32.829 "rw_ios_per_sec": 0, 00:19:32.829 "rw_mbytes_per_sec": 0, 00:19:32.829 "r_mbytes_per_sec": 0, 00:19:32.829 "w_mbytes_per_sec": 0 00:19:32.829 }, 00:19:32.829 "claimed": true, 00:19:32.829 "claim_type": "exclusive_write", 00:19:32.829 "zoned": false, 00:19:32.829 "supported_io_types": { 00:19:32.829 "read": true, 00:19:32.829 "write": true, 00:19:32.829 "unmap": true, 00:19:32.829 "flush": true, 00:19:32.829 "reset": true, 00:19:32.829 "nvme_admin": false, 00:19:32.829 "nvme_io": false, 00:19:32.829 "nvme_io_md": false, 00:19:32.829 "write_zeroes": true, 00:19:32.829 "zcopy": true, 00:19:32.829 "get_zone_info": false, 00:19:32.829 "zone_management": false, 00:19:32.829 "zone_append": false, 00:19:32.829 "compare": false, 00:19:32.829 "compare_and_write": false, 00:19:32.829 "abort": true, 00:19:32.829 "seek_hole": false, 00:19:32.829 "seek_data": false, 00:19:32.829 "copy": true, 00:19:32.829 "nvme_iov_md": false 00:19:32.829 }, 00:19:32.829 "memory_domains": [ 00:19:32.829 { 00:19:32.829 "dma_device_id": "system", 00:19:32.829 "dma_device_type": 1 00:19:32.829 }, 00:19:32.829 { 00:19:32.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.829 "dma_device_type": 2 00:19:32.829 } 00:19:32.829 ], 00:19:32.829 "driver_specific": {} 00:19:32.829 }' 00:19:32.829 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:32.829 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:32.829 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 
== 512 ]] 00:19:32.829 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:32.829 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:32.829 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:32.829 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:33.087 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:33.087 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:33.087 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:33.087 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:33.087 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:33.087 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:33.087 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:33.088 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:33.346 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:33.346 "name": "BaseBdev4", 00:19:33.346 "aliases": [ 00:19:33.346 "c562ac0c-a96d-4e4a-9b34-41df595fcaa5" 00:19:33.346 ], 00:19:33.346 "product_name": "Malloc disk", 00:19:33.346 "block_size": 512, 00:19:33.346 "num_blocks": 65536, 00:19:33.346 "uuid": "c562ac0c-a96d-4e4a-9b34-41df595fcaa5", 00:19:33.346 "assigned_rate_limits": { 00:19:33.346 "rw_ios_per_sec": 0, 00:19:33.346 "rw_mbytes_per_sec": 0, 00:19:33.346 "r_mbytes_per_sec": 0, 00:19:33.346 "w_mbytes_per_sec": 0 00:19:33.346 }, 00:19:33.346 "claimed": true, 00:19:33.346 
"claim_type": "exclusive_write", 00:19:33.346 "zoned": false, 00:19:33.346 "supported_io_types": { 00:19:33.346 "read": true, 00:19:33.346 "write": true, 00:19:33.346 "unmap": true, 00:19:33.346 "flush": true, 00:19:33.346 "reset": true, 00:19:33.346 "nvme_admin": false, 00:19:33.346 "nvme_io": false, 00:19:33.346 "nvme_io_md": false, 00:19:33.346 "write_zeroes": true, 00:19:33.346 "zcopy": true, 00:19:33.346 "get_zone_info": false, 00:19:33.346 "zone_management": false, 00:19:33.346 "zone_append": false, 00:19:33.346 "compare": false, 00:19:33.346 "compare_and_write": false, 00:19:33.346 "abort": true, 00:19:33.346 "seek_hole": false, 00:19:33.346 "seek_data": false, 00:19:33.346 "copy": true, 00:19:33.346 "nvme_iov_md": false 00:19:33.346 }, 00:19:33.346 "memory_domains": [ 00:19:33.346 { 00:19:33.346 "dma_device_id": "system", 00:19:33.346 "dma_device_type": 1 00:19:33.346 }, 00:19:33.346 { 00:19:33.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.346 "dma_device_type": 2 00:19:33.346 } 00:19:33.346 ], 00:19:33.346 "driver_specific": {} 00:19:33.346 }' 00:19:33.346 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:33.346 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:33.346 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:33.346 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:33.346 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:33.605 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:33.605 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:33.605 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:33.605 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == 
null ]] 00:19:33.605 13:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:33.605 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:33.605 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:33.605 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:33.864 [2024-07-26 13:19:14.259423] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:33.864 [2024-07-26 13:19:14.259450] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:33.864 [2024-07-26 13:19:14.259496] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:33.864 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:33.864 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:19:33.864 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:33.864 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:33.864 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:19:33.864 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:19:33.864 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:33.864 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:19:33.864 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:33.864 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:19:33.864 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:33.864 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:33.864 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:33.864 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:33.864 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:33.864 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.864 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.122 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:34.122 "name": "Existed_Raid", 00:19:34.122 "uuid": "a70b81a2-25f6-42a8-a85e-b752e8ef311d", 00:19:34.122 "strip_size_kb": 64, 00:19:34.122 "state": "offline", 00:19:34.122 "raid_level": "concat", 00:19:34.122 "superblock": false, 00:19:34.122 "num_base_bdevs": 4, 00:19:34.122 "num_base_bdevs_discovered": 3, 00:19:34.122 "num_base_bdevs_operational": 3, 00:19:34.122 "base_bdevs_list": [ 00:19:34.122 { 00:19:34.122 "name": null, 00:19:34.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.122 "is_configured": false, 00:19:34.122 "data_offset": 0, 00:19:34.122 "data_size": 65536 00:19:34.122 }, 00:19:34.122 { 00:19:34.122 "name": "BaseBdev2", 00:19:34.122 "uuid": "99c216b3-8f08-4cb5-9b8f-39af69567556", 00:19:34.122 "is_configured": true, 00:19:34.122 "data_offset": 0, 00:19:34.122 "data_size": 65536 00:19:34.122 }, 00:19:34.122 { 00:19:34.122 "name": "BaseBdev3", 00:19:34.122 "uuid": "39f872f9-75cc-4188-9229-f642097b83c5", 00:19:34.122 "is_configured": true, 00:19:34.122 
"data_offset": 0, 00:19:34.122 "data_size": 65536 00:19:34.122 }, 00:19:34.122 { 00:19:34.122 "name": "BaseBdev4", 00:19:34.122 "uuid": "c562ac0c-a96d-4e4a-9b34-41df595fcaa5", 00:19:34.122 "is_configured": true, 00:19:34.122 "data_offset": 0, 00:19:34.122 "data_size": 65536 00:19:34.122 } 00:19:34.122 ] 00:19:34.122 }' 00:19:34.122 13:19:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:34.123 13:19:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.690 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:34.690 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:34.690 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.690 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:34.949 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:34.949 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:34.949 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:35.208 [2024-07-26 13:19:15.499779] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:35.208 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:35.208 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:35.208 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:19:35.208 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:35.467 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:35.467 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:35.467 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:35.467 [2024-07-26 13:19:15.958966] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:35.467 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:35.467 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:35.467 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:35.467 13:19:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.726 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:35.726 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:35.726 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:35.985 [2024-07-26 13:19:16.442434] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:35.985 [2024-07-26 13:19:16.442475] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe7b840 name Existed_Raid, state offline 00:19:35.985 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:35.985 13:19:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:35.985 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.985 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:36.244 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:36.244 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:36.244 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:19:36.244 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:36.244 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:36.244 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:36.504 BaseBdev2 00:19:36.504 13:19:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:36.504 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:36.504 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:36.504 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:36.504 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:36.504 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:36.504 13:19:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:36.764 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:37.023 [ 00:19:37.023 { 00:19:37.023 "name": "BaseBdev2", 00:19:37.023 "aliases": [ 00:19:37.023 "1188f753-9c92-4944-9729-dc2afef86e68" 00:19:37.023 ], 00:19:37.023 "product_name": "Malloc disk", 00:19:37.023 "block_size": 512, 00:19:37.023 "num_blocks": 65536, 00:19:37.023 "uuid": "1188f753-9c92-4944-9729-dc2afef86e68", 00:19:37.023 "assigned_rate_limits": { 00:19:37.023 "rw_ios_per_sec": 0, 00:19:37.023 "rw_mbytes_per_sec": 0, 00:19:37.023 "r_mbytes_per_sec": 0, 00:19:37.023 "w_mbytes_per_sec": 0 00:19:37.023 }, 00:19:37.023 "claimed": false, 00:19:37.023 "zoned": false, 00:19:37.023 "supported_io_types": { 00:19:37.023 "read": true, 00:19:37.023 "write": true, 00:19:37.023 "unmap": true, 00:19:37.023 "flush": true, 00:19:37.023 "reset": true, 00:19:37.023 "nvme_admin": false, 00:19:37.023 "nvme_io": false, 00:19:37.023 "nvme_io_md": false, 00:19:37.023 "write_zeroes": true, 00:19:37.023 "zcopy": true, 00:19:37.023 "get_zone_info": false, 00:19:37.023 "zone_management": false, 00:19:37.023 "zone_append": false, 00:19:37.023 "compare": false, 00:19:37.023 "compare_and_write": false, 00:19:37.023 "abort": true, 00:19:37.023 "seek_hole": false, 00:19:37.023 "seek_data": false, 00:19:37.023 "copy": true, 00:19:37.023 "nvme_iov_md": false 00:19:37.023 }, 00:19:37.023 "memory_domains": [ 00:19:37.023 { 00:19:37.023 "dma_device_id": "system", 00:19:37.023 "dma_device_type": 1 00:19:37.023 }, 00:19:37.023 { 00:19:37.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.023 "dma_device_type": 2 00:19:37.023 } 00:19:37.023 ], 00:19:37.023 "driver_specific": {} 00:19:37.023 } 00:19:37.023 ] 00:19:37.023 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:37.023 
13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:37.023 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:37.023 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:37.283 BaseBdev3 00:19:37.283 13:19:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:37.283 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:37.283 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:37.283 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:37.283 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:37.283 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:37.283 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:37.283 13:19:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:37.542 [ 00:19:37.542 { 00:19:37.542 "name": "BaseBdev3", 00:19:37.542 "aliases": [ 00:19:37.542 "162bba40-4e22-4b8b-aaf3-c0865d29ac09" 00:19:37.542 ], 00:19:37.542 "product_name": "Malloc disk", 00:19:37.542 "block_size": 512, 00:19:37.542 "num_blocks": 65536, 00:19:37.542 "uuid": "162bba40-4e22-4b8b-aaf3-c0865d29ac09", 00:19:37.542 "assigned_rate_limits": { 00:19:37.542 "rw_ios_per_sec": 0, 00:19:37.542 "rw_mbytes_per_sec": 0, 00:19:37.542 
"r_mbytes_per_sec": 0, 00:19:37.542 "w_mbytes_per_sec": 0 00:19:37.542 }, 00:19:37.542 "claimed": false, 00:19:37.542 "zoned": false, 00:19:37.542 "supported_io_types": { 00:19:37.542 "read": true, 00:19:37.542 "write": true, 00:19:37.542 "unmap": true, 00:19:37.542 "flush": true, 00:19:37.542 "reset": true, 00:19:37.542 "nvme_admin": false, 00:19:37.542 "nvme_io": false, 00:19:37.542 "nvme_io_md": false, 00:19:37.542 "write_zeroes": true, 00:19:37.542 "zcopy": true, 00:19:37.542 "get_zone_info": false, 00:19:37.542 "zone_management": false, 00:19:37.542 "zone_append": false, 00:19:37.542 "compare": false, 00:19:37.542 "compare_and_write": false, 00:19:37.542 "abort": true, 00:19:37.542 "seek_hole": false, 00:19:37.542 "seek_data": false, 00:19:37.542 "copy": true, 00:19:37.542 "nvme_iov_md": false 00:19:37.542 }, 00:19:37.542 "memory_domains": [ 00:19:37.542 { 00:19:37.542 "dma_device_id": "system", 00:19:37.542 "dma_device_type": 1 00:19:37.542 }, 00:19:37.542 { 00:19:37.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.542 "dma_device_type": 2 00:19:37.542 } 00:19:37.542 ], 00:19:37.542 "driver_specific": {} 00:19:37.542 } 00:19:37.542 ] 00:19:37.542 13:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:37.543 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:37.543 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:37.543 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:37.802 BaseBdev4 00:19:37.802 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:19:37.802 13:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:19:37.802 13:19:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:37.802 13:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:37.802 13:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:37.802 13:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:37.802 13:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:38.062 13:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:38.321 [ 00:19:38.321 { 00:19:38.321 "name": "BaseBdev4", 00:19:38.321 "aliases": [ 00:19:38.321 "4753ac0e-a49d-49f2-9063-260e572ea6db" 00:19:38.321 ], 00:19:38.321 "product_name": "Malloc disk", 00:19:38.321 "block_size": 512, 00:19:38.321 "num_blocks": 65536, 00:19:38.321 "uuid": "4753ac0e-a49d-49f2-9063-260e572ea6db", 00:19:38.321 "assigned_rate_limits": { 00:19:38.321 "rw_ios_per_sec": 0, 00:19:38.321 "rw_mbytes_per_sec": 0, 00:19:38.321 "r_mbytes_per_sec": 0, 00:19:38.321 "w_mbytes_per_sec": 0 00:19:38.321 }, 00:19:38.321 "claimed": false, 00:19:38.321 "zoned": false, 00:19:38.321 "supported_io_types": { 00:19:38.321 "read": true, 00:19:38.321 "write": true, 00:19:38.321 "unmap": true, 00:19:38.321 "flush": true, 00:19:38.321 "reset": true, 00:19:38.321 "nvme_admin": false, 00:19:38.321 "nvme_io": false, 00:19:38.321 "nvme_io_md": false, 00:19:38.321 "write_zeroes": true, 00:19:38.321 "zcopy": true, 00:19:38.321 "get_zone_info": false, 00:19:38.321 "zone_management": false, 00:19:38.321 "zone_append": false, 00:19:38.321 "compare": false, 00:19:38.321 "compare_and_write": false, 00:19:38.321 "abort": true, 00:19:38.321 
"seek_hole": false, 00:19:38.321 "seek_data": false, 00:19:38.321 "copy": true, 00:19:38.321 "nvme_iov_md": false 00:19:38.321 }, 00:19:38.321 "memory_domains": [ 00:19:38.321 { 00:19:38.321 "dma_device_id": "system", 00:19:38.321 "dma_device_type": 1 00:19:38.321 }, 00:19:38.321 { 00:19:38.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:38.321 "dma_device_type": 2 00:19:38.321 } 00:19:38.321 ], 00:19:38.321 "driver_specific": {} 00:19:38.321 } 00:19:38.321 ] 00:19:38.321 13:19:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:38.321 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:38.321 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:38.321 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:38.581 [2024-07-26 13:19:18.903838] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:38.581 [2024-07-26 13:19:18.903877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:38.581 [2024-07-26 13:19:18.903895] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:38.581 [2024-07-26 13:19:18.905112] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:38.581 [2024-07-26 13:19:18.905160] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:38.581 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:38.581 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:38.581 13:19:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:38.581 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:38.581 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:38.581 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:38.581 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:38.581 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:38.581 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:38.581 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:38.581 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.581 13:19:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:38.840 13:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:38.840 "name": "Existed_Raid", 00:19:38.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.840 "strip_size_kb": 64, 00:19:38.840 "state": "configuring", 00:19:38.840 "raid_level": "concat", 00:19:38.840 "superblock": false, 00:19:38.840 "num_base_bdevs": 4, 00:19:38.840 "num_base_bdevs_discovered": 3, 00:19:38.840 "num_base_bdevs_operational": 4, 00:19:38.840 "base_bdevs_list": [ 00:19:38.840 { 00:19:38.840 "name": "BaseBdev1", 00:19:38.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.840 "is_configured": false, 00:19:38.840 "data_offset": 0, 00:19:38.840 "data_size": 0 00:19:38.840 }, 00:19:38.840 { 00:19:38.840 "name": "BaseBdev2", 00:19:38.840 "uuid": 
"1188f753-9c92-4944-9729-dc2afef86e68", 00:19:38.840 "is_configured": true, 00:19:38.840 "data_offset": 0, 00:19:38.840 "data_size": 65536 00:19:38.840 }, 00:19:38.840 { 00:19:38.840 "name": "BaseBdev3", 00:19:38.840 "uuid": "162bba40-4e22-4b8b-aaf3-c0865d29ac09", 00:19:38.840 "is_configured": true, 00:19:38.840 "data_offset": 0, 00:19:38.840 "data_size": 65536 00:19:38.840 }, 00:19:38.841 { 00:19:38.841 "name": "BaseBdev4", 00:19:38.841 "uuid": "4753ac0e-a49d-49f2-9063-260e572ea6db", 00:19:38.841 "is_configured": true, 00:19:38.841 "data_offset": 0, 00:19:38.841 "data_size": 65536 00:19:38.841 } 00:19:38.841 ] 00:19:38.841 }' 00:19:38.841 13:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:38.841 13:19:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.409 13:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:39.409 [2024-07-26 13:19:19.934528] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:39.668 13:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:39.668 13:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:39.668 13:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:39.668 13:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:39.668 13:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:39.668 13:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:39.668 13:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:19:39.668 13:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:39.668 13:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:39.668 13:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:39.668 13:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.668 13:19:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:39.668 13:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:39.668 "name": "Existed_Raid", 00:19:39.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.668 "strip_size_kb": 64, 00:19:39.668 "state": "configuring", 00:19:39.669 "raid_level": "concat", 00:19:39.669 "superblock": false, 00:19:39.669 "num_base_bdevs": 4, 00:19:39.669 "num_base_bdevs_discovered": 2, 00:19:39.669 "num_base_bdevs_operational": 4, 00:19:39.669 "base_bdevs_list": [ 00:19:39.669 { 00:19:39.669 "name": "BaseBdev1", 00:19:39.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.669 "is_configured": false, 00:19:39.669 "data_offset": 0, 00:19:39.669 "data_size": 0 00:19:39.669 }, 00:19:39.669 { 00:19:39.669 "name": null, 00:19:39.669 "uuid": "1188f753-9c92-4944-9729-dc2afef86e68", 00:19:39.669 "is_configured": false, 00:19:39.669 "data_offset": 0, 00:19:39.669 "data_size": 65536 00:19:39.669 }, 00:19:39.669 { 00:19:39.669 "name": "BaseBdev3", 00:19:39.669 "uuid": "162bba40-4e22-4b8b-aaf3-c0865d29ac09", 00:19:39.669 "is_configured": true, 00:19:39.669 "data_offset": 0, 00:19:39.669 "data_size": 65536 00:19:39.669 }, 00:19:39.669 { 00:19:39.669 "name": "BaseBdev4", 00:19:39.669 "uuid": "4753ac0e-a49d-49f2-9063-260e572ea6db", 00:19:39.669 "is_configured": true, 00:19:39.669 
"data_offset": 0, 00:19:39.669 "data_size": 65536 00:19:39.669 } 00:19:39.669 ] 00:19:39.669 }' 00:19:39.669 13:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:39.669 13:19:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.264 13:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.264 13:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:40.553 13:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:40.553 13:19:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:40.812 [2024-07-26 13:19:21.205158] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:40.812 BaseBdev1 00:19:40.812 13:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:40.812 13:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:40.812 13:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:40.812 13:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:40.812 13:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:40.812 13:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:40.813 13:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:41.072 
13:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:41.332 [ 00:19:41.332 { 00:19:41.332 "name": "BaseBdev1", 00:19:41.332 "aliases": [ 00:19:41.332 "10b5ca41-9563-446e-a739-a2532916431b" 00:19:41.332 ], 00:19:41.332 "product_name": "Malloc disk", 00:19:41.332 "block_size": 512, 00:19:41.332 "num_blocks": 65536, 00:19:41.332 "uuid": "10b5ca41-9563-446e-a739-a2532916431b", 00:19:41.332 "assigned_rate_limits": { 00:19:41.332 "rw_ios_per_sec": 0, 00:19:41.332 "rw_mbytes_per_sec": 0, 00:19:41.332 "r_mbytes_per_sec": 0, 00:19:41.332 "w_mbytes_per_sec": 0 00:19:41.332 }, 00:19:41.332 "claimed": true, 00:19:41.332 "claim_type": "exclusive_write", 00:19:41.332 "zoned": false, 00:19:41.332 "supported_io_types": { 00:19:41.332 "read": true, 00:19:41.332 "write": true, 00:19:41.332 "unmap": true, 00:19:41.332 "flush": true, 00:19:41.332 "reset": true, 00:19:41.332 "nvme_admin": false, 00:19:41.332 "nvme_io": false, 00:19:41.332 "nvme_io_md": false, 00:19:41.332 "write_zeroes": true, 00:19:41.332 "zcopy": true, 00:19:41.332 "get_zone_info": false, 00:19:41.332 "zone_management": false, 00:19:41.332 "zone_append": false, 00:19:41.332 "compare": false, 00:19:41.332 "compare_and_write": false, 00:19:41.332 "abort": true, 00:19:41.332 "seek_hole": false, 00:19:41.332 "seek_data": false, 00:19:41.332 "copy": true, 00:19:41.332 "nvme_iov_md": false 00:19:41.332 }, 00:19:41.332 "memory_domains": [ 00:19:41.332 { 00:19:41.332 "dma_device_id": "system", 00:19:41.332 "dma_device_type": 1 00:19:41.332 }, 00:19:41.332 { 00:19:41.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.332 "dma_device_type": 2 00:19:41.332 } 00:19:41.332 ], 00:19:41.332 "driver_specific": {} 00:19:41.332 } 00:19:41.332 ] 00:19:41.332 13:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:41.332 13:19:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:41.332 13:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:41.332 13:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:41.332 13:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:41.332 13:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:41.332 13:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:41.332 13:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:41.332 13:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:41.332 13:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:41.332 13:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:41.332 13:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.332 13:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:41.591 13:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:41.591 "name": "Existed_Raid", 00:19:41.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.591 "strip_size_kb": 64, 00:19:41.591 "state": "configuring", 00:19:41.591 "raid_level": "concat", 00:19:41.591 "superblock": false, 00:19:41.591 "num_base_bdevs": 4, 00:19:41.591 "num_base_bdevs_discovered": 3, 00:19:41.591 "num_base_bdevs_operational": 4, 00:19:41.592 "base_bdevs_list": [ 00:19:41.592 { 
00:19:41.592 "name": "BaseBdev1", 00:19:41.592 "uuid": "10b5ca41-9563-446e-a739-a2532916431b", 00:19:41.592 "is_configured": true, 00:19:41.592 "data_offset": 0, 00:19:41.592 "data_size": 65536 00:19:41.592 }, 00:19:41.592 { 00:19:41.592 "name": null, 00:19:41.592 "uuid": "1188f753-9c92-4944-9729-dc2afef86e68", 00:19:41.592 "is_configured": false, 00:19:41.592 "data_offset": 0, 00:19:41.592 "data_size": 65536 00:19:41.592 }, 00:19:41.592 { 00:19:41.592 "name": "BaseBdev3", 00:19:41.592 "uuid": "162bba40-4e22-4b8b-aaf3-c0865d29ac09", 00:19:41.592 "is_configured": true, 00:19:41.592 "data_offset": 0, 00:19:41.592 "data_size": 65536 00:19:41.592 }, 00:19:41.592 { 00:19:41.592 "name": "BaseBdev4", 00:19:41.592 "uuid": "4753ac0e-a49d-49f2-9063-260e572ea6db", 00:19:41.592 "is_configured": true, 00:19:41.592 "data_offset": 0, 00:19:41.592 "data_size": 65536 00:19:41.592 } 00:19:41.592 ] 00:19:41.592 }' 00:19:41.592 13:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:41.592 13:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.162 13:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.162 13:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:42.420 13:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:42.420 13:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:42.420 [2024-07-26 13:19:22.913701] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:42.420 13:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 4 00:19:42.420 13:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:42.420 13:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:42.420 13:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:42.420 13:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:42.420 13:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:42.420 13:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:42.420 13:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:42.420 13:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:42.420 13:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:42.420 13:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.420 13:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.680 13:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:42.680 "name": "Existed_Raid", 00:19:42.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.680 "strip_size_kb": 64, 00:19:42.680 "state": "configuring", 00:19:42.680 "raid_level": "concat", 00:19:42.680 "superblock": false, 00:19:42.680 "num_base_bdevs": 4, 00:19:42.680 "num_base_bdevs_discovered": 2, 00:19:42.680 "num_base_bdevs_operational": 4, 00:19:42.680 "base_bdevs_list": [ 00:19:42.680 { 00:19:42.680 "name": "BaseBdev1", 00:19:42.680 "uuid": "10b5ca41-9563-446e-a739-a2532916431b", 00:19:42.680 
"is_configured": true, 00:19:42.680 "data_offset": 0, 00:19:42.680 "data_size": 65536 00:19:42.680 }, 00:19:42.680 { 00:19:42.680 "name": null, 00:19:42.680 "uuid": "1188f753-9c92-4944-9729-dc2afef86e68", 00:19:42.680 "is_configured": false, 00:19:42.680 "data_offset": 0, 00:19:42.680 "data_size": 65536 00:19:42.680 }, 00:19:42.680 { 00:19:42.680 "name": null, 00:19:42.680 "uuid": "162bba40-4e22-4b8b-aaf3-c0865d29ac09", 00:19:42.680 "is_configured": false, 00:19:42.680 "data_offset": 0, 00:19:42.680 "data_size": 65536 00:19:42.680 }, 00:19:42.680 { 00:19:42.680 "name": "BaseBdev4", 00:19:42.680 "uuid": "4753ac0e-a49d-49f2-9063-260e572ea6db", 00:19:42.680 "is_configured": true, 00:19:42.680 "data_offset": 0, 00:19:42.680 "data_size": 65536 00:19:42.680 } 00:19:42.680 ] 00:19:42.680 }' 00:19:42.680 13:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:42.680 13:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.248 13:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.248 13:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:43.508 13:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:43.508 13:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:43.767 [2024-07-26 13:19:24.156990] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:43.767 13:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:43.767 13:19:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:43.767 13:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:43.767 13:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:43.767 13:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:43.767 13:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:43.767 13:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:43.767 13:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:43.767 13:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:43.767 13:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:43.767 13:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.767 13:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.026 13:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:44.026 "name": "Existed_Raid", 00:19:44.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.026 "strip_size_kb": 64, 00:19:44.026 "state": "configuring", 00:19:44.026 "raid_level": "concat", 00:19:44.026 "superblock": false, 00:19:44.026 "num_base_bdevs": 4, 00:19:44.026 "num_base_bdevs_discovered": 3, 00:19:44.026 "num_base_bdevs_operational": 4, 00:19:44.026 "base_bdevs_list": [ 00:19:44.026 { 00:19:44.026 "name": "BaseBdev1", 00:19:44.026 "uuid": "10b5ca41-9563-446e-a739-a2532916431b", 00:19:44.026 "is_configured": true, 00:19:44.026 "data_offset": 0, 00:19:44.026 "data_size": 65536 
00:19:44.026 }, 00:19:44.026 { 00:19:44.026 "name": null, 00:19:44.026 "uuid": "1188f753-9c92-4944-9729-dc2afef86e68", 00:19:44.026 "is_configured": false, 00:19:44.026 "data_offset": 0, 00:19:44.026 "data_size": 65536 00:19:44.026 }, 00:19:44.026 { 00:19:44.026 "name": "BaseBdev3", 00:19:44.026 "uuid": "162bba40-4e22-4b8b-aaf3-c0865d29ac09", 00:19:44.026 "is_configured": true, 00:19:44.026 "data_offset": 0, 00:19:44.026 "data_size": 65536 00:19:44.026 }, 00:19:44.026 { 00:19:44.026 "name": "BaseBdev4", 00:19:44.026 "uuid": "4753ac0e-a49d-49f2-9063-260e572ea6db", 00:19:44.026 "is_configured": true, 00:19:44.026 "data_offset": 0, 00:19:44.026 "data_size": 65536 00:19:44.026 } 00:19:44.026 ] 00:19:44.026 }' 00:19:44.026 13:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:44.026 13:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.595 13:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.595 13:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:44.854 13:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:44.854 13:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:45.113 [2024-07-26 13:19:25.424337] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:45.113 13:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:45.113 13:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:45.113 13:19:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:45.113 13:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:45.113 13:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:45.113 13:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:45.113 13:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:45.113 13:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:45.113 13:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:45.114 13:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:45.114 13:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.114 13:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:45.373 13:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:45.373 "name": "Existed_Raid", 00:19:45.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.373 "strip_size_kb": 64, 00:19:45.373 "state": "configuring", 00:19:45.373 "raid_level": "concat", 00:19:45.373 "superblock": false, 00:19:45.373 "num_base_bdevs": 4, 00:19:45.373 "num_base_bdevs_discovered": 2, 00:19:45.373 "num_base_bdevs_operational": 4, 00:19:45.373 "base_bdevs_list": [ 00:19:45.373 { 00:19:45.373 "name": null, 00:19:45.373 "uuid": "10b5ca41-9563-446e-a739-a2532916431b", 00:19:45.373 "is_configured": false, 00:19:45.373 "data_offset": 0, 00:19:45.373 "data_size": 65536 00:19:45.373 }, 00:19:45.373 { 00:19:45.373 "name": null, 00:19:45.373 "uuid": "1188f753-9c92-4944-9729-dc2afef86e68", 
00:19:45.373 "is_configured": false, 00:19:45.373 "data_offset": 0, 00:19:45.373 "data_size": 65536 00:19:45.373 }, 00:19:45.373 { 00:19:45.373 "name": "BaseBdev3", 00:19:45.373 "uuid": "162bba40-4e22-4b8b-aaf3-c0865d29ac09", 00:19:45.373 "is_configured": true, 00:19:45.373 "data_offset": 0, 00:19:45.373 "data_size": 65536 00:19:45.373 }, 00:19:45.373 { 00:19:45.373 "name": "BaseBdev4", 00:19:45.373 "uuid": "4753ac0e-a49d-49f2-9063-260e572ea6db", 00:19:45.373 "is_configured": true, 00:19:45.373 "data_offset": 0, 00:19:45.373 "data_size": 65536 00:19:45.373 } 00:19:45.373 ] 00:19:45.373 }' 00:19:45.373 13:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:45.373 13:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.940 13:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.940 13:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:45.940 13:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:45.940 13:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:46.200 [2024-07-26 13:19:26.669779] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:46.200 13:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:46.200 13:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:46.200 13:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:46.200 
13:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:46.200 13:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:46.200 13:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:46.200 13:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:46.200 13:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:46.200 13:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:46.200 13:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:46.200 13:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.200 13:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:46.458 13:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:46.458 "name": "Existed_Raid", 00:19:46.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.458 "strip_size_kb": 64, 00:19:46.458 "state": "configuring", 00:19:46.458 "raid_level": "concat", 00:19:46.458 "superblock": false, 00:19:46.458 "num_base_bdevs": 4, 00:19:46.458 "num_base_bdevs_discovered": 3, 00:19:46.458 "num_base_bdevs_operational": 4, 00:19:46.458 "base_bdevs_list": [ 00:19:46.458 { 00:19:46.458 "name": null, 00:19:46.458 "uuid": "10b5ca41-9563-446e-a739-a2532916431b", 00:19:46.458 "is_configured": false, 00:19:46.458 "data_offset": 0, 00:19:46.458 "data_size": 65536 00:19:46.458 }, 00:19:46.458 { 00:19:46.458 "name": "BaseBdev2", 00:19:46.458 "uuid": "1188f753-9c92-4944-9729-dc2afef86e68", 00:19:46.458 "is_configured": true, 00:19:46.458 "data_offset": 0, 
00:19:46.458 "data_size": 65536 00:19:46.458 }, 00:19:46.458 { 00:19:46.458 "name": "BaseBdev3", 00:19:46.458 "uuid": "162bba40-4e22-4b8b-aaf3-c0865d29ac09", 00:19:46.458 "is_configured": true, 00:19:46.458 "data_offset": 0, 00:19:46.458 "data_size": 65536 00:19:46.458 }, 00:19:46.458 { 00:19:46.458 "name": "BaseBdev4", 00:19:46.458 "uuid": "4753ac0e-a49d-49f2-9063-260e572ea6db", 00:19:46.458 "is_configured": true, 00:19:46.458 "data_offset": 0, 00:19:46.458 "data_size": 65536 00:19:46.458 } 00:19:46.458 ] 00:19:46.458 }' 00:19:46.458 13:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:46.459 13:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.025 13:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.025 13:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:47.284 13:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:47.284 13:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.284 13:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:47.543 13:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 10b5ca41-9563-446e-a739-a2532916431b 00:19:47.802 [2024-07-26 13:19:28.140757] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:47.802 [2024-07-26 13:19:28.140790] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 
0xe7bad0 00:19:47.802 [2024-07-26 13:19:28.140798] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:19:47.802 [2024-07-26 13:19:28.140974] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10250d0 00:19:47.802 [2024-07-26 13:19:28.141081] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe7bad0 00:19:47.802 [2024-07-26 13:19:28.141090] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xe7bad0 00:19:47.802 [2024-07-26 13:19:28.141243] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.802 NewBaseBdev 00:19:47.802 13:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:47.802 13:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:19:47.802 13:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:47.802 13:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:47.802 13:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:47.802 13:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:47.802 13:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:48.062 13:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:48.062 [ 00:19:48.062 { 00:19:48.062 "name": "NewBaseBdev", 00:19:48.062 "aliases": [ 00:19:48.062 "10b5ca41-9563-446e-a739-a2532916431b" 00:19:48.062 ], 00:19:48.062 "product_name": "Malloc disk", 00:19:48.062 
"block_size": 512, 00:19:48.062 "num_blocks": 65536, 00:19:48.062 "uuid": "10b5ca41-9563-446e-a739-a2532916431b", 00:19:48.062 "assigned_rate_limits": { 00:19:48.062 "rw_ios_per_sec": 0, 00:19:48.062 "rw_mbytes_per_sec": 0, 00:19:48.062 "r_mbytes_per_sec": 0, 00:19:48.062 "w_mbytes_per_sec": 0 00:19:48.062 }, 00:19:48.062 "claimed": true, 00:19:48.062 "claim_type": "exclusive_write", 00:19:48.062 "zoned": false, 00:19:48.062 "supported_io_types": { 00:19:48.062 "read": true, 00:19:48.062 "write": true, 00:19:48.062 "unmap": true, 00:19:48.062 "flush": true, 00:19:48.062 "reset": true, 00:19:48.062 "nvme_admin": false, 00:19:48.062 "nvme_io": false, 00:19:48.062 "nvme_io_md": false, 00:19:48.062 "write_zeroes": true, 00:19:48.062 "zcopy": true, 00:19:48.062 "get_zone_info": false, 00:19:48.062 "zone_management": false, 00:19:48.062 "zone_append": false, 00:19:48.062 "compare": false, 00:19:48.062 "compare_and_write": false, 00:19:48.062 "abort": true, 00:19:48.062 "seek_hole": false, 00:19:48.062 "seek_data": false, 00:19:48.062 "copy": true, 00:19:48.062 "nvme_iov_md": false 00:19:48.062 }, 00:19:48.062 "memory_domains": [ 00:19:48.062 { 00:19:48.062 "dma_device_id": "system", 00:19:48.062 "dma_device_type": 1 00:19:48.062 }, 00:19:48.062 { 00:19:48.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.062 "dma_device_type": 2 00:19:48.062 } 00:19:48.062 ], 00:19:48.062 "driver_specific": {} 00:19:48.062 } 00:19:48.062 ] 00:19:48.062 13:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:48.062 13:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:48.062 13:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:48.062 13:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:48.062 13:19:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:48.062 13:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:48.062 13:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:48.062 13:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:48.062 13:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:48.062 13:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:48.062 13:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:48.322 13:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:48.322 13:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.322 13:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:48.322 "name": "Existed_Raid", 00:19:48.322 "uuid": "7803c172-ef92-40d3-9905-8ecf4c401970", 00:19:48.322 "strip_size_kb": 64, 00:19:48.322 "state": "online", 00:19:48.322 "raid_level": "concat", 00:19:48.322 "superblock": false, 00:19:48.322 "num_base_bdevs": 4, 00:19:48.322 "num_base_bdevs_discovered": 4, 00:19:48.322 "num_base_bdevs_operational": 4, 00:19:48.322 "base_bdevs_list": [ 00:19:48.322 { 00:19:48.322 "name": "NewBaseBdev", 00:19:48.322 "uuid": "10b5ca41-9563-446e-a739-a2532916431b", 00:19:48.322 "is_configured": true, 00:19:48.322 "data_offset": 0, 00:19:48.322 "data_size": 65536 00:19:48.322 }, 00:19:48.322 { 00:19:48.322 "name": "BaseBdev2", 00:19:48.322 "uuid": "1188f753-9c92-4944-9729-dc2afef86e68", 00:19:48.322 "is_configured": true, 00:19:48.322 "data_offset": 0, 00:19:48.322 "data_size": 65536 00:19:48.322 }, 
00:19:48.322 { 00:19:48.322 "name": "BaseBdev3", 00:19:48.322 "uuid": "162bba40-4e22-4b8b-aaf3-c0865d29ac09", 00:19:48.322 "is_configured": true, 00:19:48.322 "data_offset": 0, 00:19:48.322 "data_size": 65536 00:19:48.322 }, 00:19:48.322 { 00:19:48.322 "name": "BaseBdev4", 00:19:48.322 "uuid": "4753ac0e-a49d-49f2-9063-260e572ea6db", 00:19:48.322 "is_configured": true, 00:19:48.322 "data_offset": 0, 00:19:48.322 "data_size": 65536 00:19:48.322 } 00:19:48.322 ] 00:19:48.322 }' 00:19:48.322 13:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:48.322 13:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.890 13:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:48.890 13:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:48.890 13:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:48.890 13:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:48.890 13:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:48.890 13:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:48.890 13:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:48.890 13:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:49.149 [2024-07-26 13:19:29.616967] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:49.149 13:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:49.149 "name": "Existed_Raid", 00:19:49.149 "aliases": [ 00:19:49.149 "7803c172-ef92-40d3-9905-8ecf4c401970" 
00:19:49.149 ], 00:19:49.149 "product_name": "Raid Volume", 00:19:49.149 "block_size": 512, 00:19:49.149 "num_blocks": 262144, 00:19:49.149 "uuid": "7803c172-ef92-40d3-9905-8ecf4c401970", 00:19:49.149 "assigned_rate_limits": { 00:19:49.149 "rw_ios_per_sec": 0, 00:19:49.149 "rw_mbytes_per_sec": 0, 00:19:49.149 "r_mbytes_per_sec": 0, 00:19:49.149 "w_mbytes_per_sec": 0 00:19:49.149 }, 00:19:49.149 "claimed": false, 00:19:49.149 "zoned": false, 00:19:49.149 "supported_io_types": { 00:19:49.149 "read": true, 00:19:49.149 "write": true, 00:19:49.149 "unmap": true, 00:19:49.149 "flush": true, 00:19:49.149 "reset": true, 00:19:49.149 "nvme_admin": false, 00:19:49.149 "nvme_io": false, 00:19:49.149 "nvme_io_md": false, 00:19:49.149 "write_zeroes": true, 00:19:49.149 "zcopy": false, 00:19:49.149 "get_zone_info": false, 00:19:49.149 "zone_management": false, 00:19:49.149 "zone_append": false, 00:19:49.149 "compare": false, 00:19:49.149 "compare_and_write": false, 00:19:49.149 "abort": false, 00:19:49.149 "seek_hole": false, 00:19:49.149 "seek_data": false, 00:19:49.149 "copy": false, 00:19:49.149 "nvme_iov_md": false 00:19:49.149 }, 00:19:49.149 "memory_domains": [ 00:19:49.149 { 00:19:49.149 "dma_device_id": "system", 00:19:49.149 "dma_device_type": 1 00:19:49.149 }, 00:19:49.149 { 00:19:49.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.149 "dma_device_type": 2 00:19:49.149 }, 00:19:49.149 { 00:19:49.149 "dma_device_id": "system", 00:19:49.149 "dma_device_type": 1 00:19:49.149 }, 00:19:49.149 { 00:19:49.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.149 "dma_device_type": 2 00:19:49.149 }, 00:19:49.149 { 00:19:49.149 "dma_device_id": "system", 00:19:49.149 "dma_device_type": 1 00:19:49.149 }, 00:19:49.149 { 00:19:49.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.149 "dma_device_type": 2 00:19:49.149 }, 00:19:49.149 { 00:19:49.149 "dma_device_id": "system", 00:19:49.149 "dma_device_type": 1 00:19:49.149 }, 00:19:49.149 { 00:19:49.149 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:49.149 "dma_device_type": 2 00:19:49.149 } 00:19:49.149 ], 00:19:49.149 "driver_specific": { 00:19:49.149 "raid": { 00:19:49.149 "uuid": "7803c172-ef92-40d3-9905-8ecf4c401970", 00:19:49.149 "strip_size_kb": 64, 00:19:49.149 "state": "online", 00:19:49.149 "raid_level": "concat", 00:19:49.149 "superblock": false, 00:19:49.149 "num_base_bdevs": 4, 00:19:49.149 "num_base_bdevs_discovered": 4, 00:19:49.149 "num_base_bdevs_operational": 4, 00:19:49.149 "base_bdevs_list": [ 00:19:49.149 { 00:19:49.149 "name": "NewBaseBdev", 00:19:49.149 "uuid": "10b5ca41-9563-446e-a739-a2532916431b", 00:19:49.149 "is_configured": true, 00:19:49.149 "data_offset": 0, 00:19:49.149 "data_size": 65536 00:19:49.149 }, 00:19:49.149 { 00:19:49.149 "name": "BaseBdev2", 00:19:49.149 "uuid": "1188f753-9c92-4944-9729-dc2afef86e68", 00:19:49.149 "is_configured": true, 00:19:49.149 "data_offset": 0, 00:19:49.149 "data_size": 65536 00:19:49.149 }, 00:19:49.149 { 00:19:49.149 "name": "BaseBdev3", 00:19:49.149 "uuid": "162bba40-4e22-4b8b-aaf3-c0865d29ac09", 00:19:49.149 "is_configured": true, 00:19:49.149 "data_offset": 0, 00:19:49.149 "data_size": 65536 00:19:49.149 }, 00:19:49.149 { 00:19:49.149 "name": "BaseBdev4", 00:19:49.149 "uuid": "4753ac0e-a49d-49f2-9063-260e572ea6db", 00:19:49.149 "is_configured": true, 00:19:49.149 "data_offset": 0, 00:19:49.149 "data_size": 65536 00:19:49.149 } 00:19:49.149 ] 00:19:49.149 } 00:19:49.149 } 00:19:49.149 }' 00:19:49.149 13:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:49.409 13:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:49.409 BaseBdev2 00:19:49.409 BaseBdev3 00:19:49.409 BaseBdev4' 00:19:49.409 13:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:49.409 13:19:29 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:49.409 13:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:49.409 13:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:49.409 "name": "NewBaseBdev", 00:19:49.409 "aliases": [ 00:19:49.409 "10b5ca41-9563-446e-a739-a2532916431b" 00:19:49.409 ], 00:19:49.409 "product_name": "Malloc disk", 00:19:49.409 "block_size": 512, 00:19:49.409 "num_blocks": 65536, 00:19:49.409 "uuid": "10b5ca41-9563-446e-a739-a2532916431b", 00:19:49.409 "assigned_rate_limits": { 00:19:49.409 "rw_ios_per_sec": 0, 00:19:49.409 "rw_mbytes_per_sec": 0, 00:19:49.409 "r_mbytes_per_sec": 0, 00:19:49.409 "w_mbytes_per_sec": 0 00:19:49.409 }, 00:19:49.409 "claimed": true, 00:19:49.409 "claim_type": "exclusive_write", 00:19:49.409 "zoned": false, 00:19:49.409 "supported_io_types": { 00:19:49.409 "read": true, 00:19:49.409 "write": true, 00:19:49.409 "unmap": true, 00:19:49.409 "flush": true, 00:19:49.409 "reset": true, 00:19:49.409 "nvme_admin": false, 00:19:49.409 "nvme_io": false, 00:19:49.409 "nvme_io_md": false, 00:19:49.409 "write_zeroes": true, 00:19:49.409 "zcopy": true, 00:19:49.409 "get_zone_info": false, 00:19:49.409 "zone_management": false, 00:19:49.409 "zone_append": false, 00:19:49.409 "compare": false, 00:19:49.409 "compare_and_write": false, 00:19:49.409 "abort": true, 00:19:49.409 "seek_hole": false, 00:19:49.409 "seek_data": false, 00:19:49.409 "copy": true, 00:19:49.409 "nvme_iov_md": false 00:19:49.409 }, 00:19:49.409 "memory_domains": [ 00:19:49.409 { 00:19:49.409 "dma_device_id": "system", 00:19:49.409 "dma_device_type": 1 00:19:49.409 }, 00:19:49.409 { 00:19:49.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.409 "dma_device_type": 2 00:19:49.409 } 00:19:49.409 ], 00:19:49.409 "driver_specific": {} 00:19:49.409 }' 00:19:49.409 13:19:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:49.668 13:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:49.668 13:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:49.668 13:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:49.668 13:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:49.668 13:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:49.668 13:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:49.668 13:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:49.668 13:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:49.668 13:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:49.668 13:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:49.927 13:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:49.927 13:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:49.927 13:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:49.927 13:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:49.927 13:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:49.927 "name": "BaseBdev2", 00:19:49.927 "aliases": [ 00:19:49.927 "1188f753-9c92-4944-9729-dc2afef86e68" 00:19:49.927 ], 00:19:49.927 "product_name": "Malloc disk", 00:19:49.927 "block_size": 512, 00:19:49.927 "num_blocks": 65536, 00:19:49.927 "uuid": 
"1188f753-9c92-4944-9729-dc2afef86e68", 00:19:49.927 "assigned_rate_limits": { 00:19:49.927 "rw_ios_per_sec": 0, 00:19:49.927 "rw_mbytes_per_sec": 0, 00:19:49.927 "r_mbytes_per_sec": 0, 00:19:49.927 "w_mbytes_per_sec": 0 00:19:49.927 }, 00:19:49.927 "claimed": true, 00:19:49.927 "claim_type": "exclusive_write", 00:19:49.927 "zoned": false, 00:19:49.927 "supported_io_types": { 00:19:49.927 "read": true, 00:19:49.927 "write": true, 00:19:49.927 "unmap": true, 00:19:49.927 "flush": true, 00:19:49.927 "reset": true, 00:19:49.927 "nvme_admin": false, 00:19:49.927 "nvme_io": false, 00:19:49.927 "nvme_io_md": false, 00:19:49.927 "write_zeroes": true, 00:19:49.927 "zcopy": true, 00:19:49.927 "get_zone_info": false, 00:19:49.927 "zone_management": false, 00:19:49.927 "zone_append": false, 00:19:49.927 "compare": false, 00:19:49.927 "compare_and_write": false, 00:19:49.927 "abort": true, 00:19:49.927 "seek_hole": false, 00:19:49.927 "seek_data": false, 00:19:49.927 "copy": true, 00:19:49.927 "nvme_iov_md": false 00:19:49.927 }, 00:19:49.927 "memory_domains": [ 00:19:49.927 { 00:19:49.927 "dma_device_id": "system", 00:19:49.927 "dma_device_type": 1 00:19:49.927 }, 00:19:49.927 { 00:19:49.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.927 "dma_device_type": 2 00:19:49.927 } 00:19:49.927 ], 00:19:49.927 "driver_specific": {} 00:19:49.927 }' 00:19:49.927 13:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:50.185 13:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:50.186 13:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:50.186 13:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:50.186 13:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:50.186 13:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:50.186 13:19:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:50.186 13:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:50.186 13:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:50.186 13:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:50.445 13:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:50.445 13:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:50.445 13:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:50.445 13:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:50.445 13:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:50.704 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:50.704 "name": "BaseBdev3", 00:19:50.704 "aliases": [ 00:19:50.704 "162bba40-4e22-4b8b-aaf3-c0865d29ac09" 00:19:50.704 ], 00:19:50.704 "product_name": "Malloc disk", 00:19:50.704 "block_size": 512, 00:19:50.704 "num_blocks": 65536, 00:19:50.704 "uuid": "162bba40-4e22-4b8b-aaf3-c0865d29ac09", 00:19:50.704 "assigned_rate_limits": { 00:19:50.704 "rw_ios_per_sec": 0, 00:19:50.704 "rw_mbytes_per_sec": 0, 00:19:50.704 "r_mbytes_per_sec": 0, 00:19:50.704 "w_mbytes_per_sec": 0 00:19:50.704 }, 00:19:50.704 "claimed": true, 00:19:50.704 "claim_type": "exclusive_write", 00:19:50.704 "zoned": false, 00:19:50.704 "supported_io_types": { 00:19:50.704 "read": true, 00:19:50.704 "write": true, 00:19:50.704 "unmap": true, 00:19:50.704 "flush": true, 00:19:50.704 "reset": true, 00:19:50.704 "nvme_admin": false, 00:19:50.704 "nvme_io": false, 00:19:50.704 "nvme_io_md": false, 
00:19:50.704 "write_zeroes": true, 00:19:50.704 "zcopy": true, 00:19:50.704 "get_zone_info": false, 00:19:50.704 "zone_management": false, 00:19:50.704 "zone_append": false, 00:19:50.704 "compare": false, 00:19:50.704 "compare_and_write": false, 00:19:50.704 "abort": true, 00:19:50.704 "seek_hole": false, 00:19:50.704 "seek_data": false, 00:19:50.704 "copy": true, 00:19:50.704 "nvme_iov_md": false 00:19:50.704 }, 00:19:50.704 "memory_domains": [ 00:19:50.704 { 00:19:50.704 "dma_device_id": "system", 00:19:50.704 "dma_device_type": 1 00:19:50.704 }, 00:19:50.704 { 00:19:50.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.704 "dma_device_type": 2 00:19:50.704 } 00:19:50.704 ], 00:19:50.704 "driver_specific": {} 00:19:50.704 }' 00:19:50.704 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:50.704 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:50.704 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:50.704 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:50.704 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:50.704 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:50.704 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:50.704 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:50.962 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:50.962 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:50.962 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:50.962 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:50.962 13:19:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:50.962 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:19:50.962 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:51.221 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:51.221 "name": "BaseBdev4", 00:19:51.221 "aliases": [ 00:19:51.221 "4753ac0e-a49d-49f2-9063-260e572ea6db" 00:19:51.221 ], 00:19:51.221 "product_name": "Malloc disk", 00:19:51.221 "block_size": 512, 00:19:51.221 "num_blocks": 65536, 00:19:51.221 "uuid": "4753ac0e-a49d-49f2-9063-260e572ea6db", 00:19:51.221 "assigned_rate_limits": { 00:19:51.221 "rw_ios_per_sec": 0, 00:19:51.221 "rw_mbytes_per_sec": 0, 00:19:51.221 "r_mbytes_per_sec": 0, 00:19:51.221 "w_mbytes_per_sec": 0 00:19:51.221 }, 00:19:51.221 "claimed": true, 00:19:51.221 "claim_type": "exclusive_write", 00:19:51.221 "zoned": false, 00:19:51.221 "supported_io_types": { 00:19:51.221 "read": true, 00:19:51.221 "write": true, 00:19:51.221 "unmap": true, 00:19:51.221 "flush": true, 00:19:51.221 "reset": true, 00:19:51.221 "nvme_admin": false, 00:19:51.221 "nvme_io": false, 00:19:51.221 "nvme_io_md": false, 00:19:51.221 "write_zeroes": true, 00:19:51.221 "zcopy": true, 00:19:51.221 "get_zone_info": false, 00:19:51.221 "zone_management": false, 00:19:51.221 "zone_append": false, 00:19:51.221 "compare": false, 00:19:51.221 "compare_and_write": false, 00:19:51.221 "abort": true, 00:19:51.221 "seek_hole": false, 00:19:51.221 "seek_data": false, 00:19:51.221 "copy": true, 00:19:51.221 "nvme_iov_md": false 00:19:51.221 }, 00:19:51.221 "memory_domains": [ 00:19:51.221 { 00:19:51.221 "dma_device_id": "system", 00:19:51.221 "dma_device_type": 1 00:19:51.221 }, 00:19:51.221 { 00:19:51.221 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:51.221 "dma_device_type": 2 00:19:51.221 } 00:19:51.221 ], 00:19:51.221 "driver_specific": {} 00:19:51.221 }' 00:19:51.221 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:51.221 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:51.221 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:51.221 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:51.221 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:51.481 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:51.481 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:51.481 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:51.481 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:51.481 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:51.481 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:51.481 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:51.481 13:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:51.740 [2024-07-26 13:19:32.155368] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:51.740 [2024-07-26 13:19:32.155393] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:51.740 [2024-07-26 13:19:32.155447] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:51.740 [2024-07-26 13:19:32.155504] 
bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:51.740 [2024-07-26 13:19:32.155515] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe7bad0 name Existed_Raid, state offline 00:19:51.740 13:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 745030 00:19:51.740 13:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 745030 ']' 00:19:51.740 13:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 745030 00:19:51.740 13:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:19:51.740 13:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:51.740 13:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 745030 00:19:51.740 13:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:51.740 13:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:51.740 13:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 745030' 00:19:51.740 killing process with pid 745030 00:19:51.740 13:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 745030 00:19:51.740 [2024-07-26 13:19:32.233270] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:51.740 13:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 745030 00:19:51.740 [2024-07-26 13:19:32.264968] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:52.000 13:19:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:19:52.000 00:19:52.000 real 0m30.411s 00:19:52.000 user 0m55.868s 00:19:52.000 sys 0m5.432s 00:19:52.000 13:19:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:52.000 13:19:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.000 ************************************ 00:19:52.000 END TEST raid_state_function_test 00:19:52.000 ************************************ 00:19:52.000 13:19:32 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:19:52.000 13:19:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:52.000 13:19:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:52.000 13:19:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:52.260 ************************************ 00:19:52.260 START TEST raid_state_function_test_sb 00:19:52.260 ************************************ 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 
00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # 
strip_size_create_arg='-z 64' 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=750937 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 750937' 00:19:52.260 Process raid pid: 750937 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 750937 /var/tmp/spdk-raid.sock 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 750937 ']' 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:52.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:52.260 13:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.260 [2024-07-26 13:19:32.607685] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:19:52.260 [2024-07-26 13:19:32.607740] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3d:02.1 cannot be used 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3d:02.3 cannot be used 00:19:52.260 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3f:01.3 cannot be used 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3f:01.4 cannot be used 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:52.260 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.260 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:52.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.261 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:52.261 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:52.261 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:52.261 [2024-07-26 13:19:32.738123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.520 [2024-07-26 13:19:32.824963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.520 [2024-07-26 13:19:32.883284] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:52.520 [2024-07-26 13:19:32.883319] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:53.117 13:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:53.117 13:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:19:53.117 13:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:53.376 [2024-07-26 13:19:33.705527] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:53.376 [2024-07-26 13:19:33.705563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev1 doesn't exist now 00:19:53.376 [2024-07-26 13:19:33.705573] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:53.376 [2024-07-26 13:19:33.705583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:53.376 [2024-07-26 13:19:33.705591] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:53.376 [2024-07-26 13:19:33.705601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:53.376 [2024-07-26 13:19:33.705609] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:53.376 [2024-07-26 13:19:33.705620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:53.376 13:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:53.376 13:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:53.376 13:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:53.376 13:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:53.376 13:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:53.376 13:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:53.376 13:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:53.376 13:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:53.376 13:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:53.376 13:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:19:53.376 13:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.376 13:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:53.635 13:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:53.635 "name": "Existed_Raid", 00:19:53.635 "uuid": "34d0d8d4-7546-4cb6-b5a1-7c36895abda0", 00:19:53.635 "strip_size_kb": 64, 00:19:53.635 "state": "configuring", 00:19:53.635 "raid_level": "concat", 00:19:53.635 "superblock": true, 00:19:53.635 "num_base_bdevs": 4, 00:19:53.635 "num_base_bdevs_discovered": 0, 00:19:53.635 "num_base_bdevs_operational": 4, 00:19:53.635 "base_bdevs_list": [ 00:19:53.635 { 00:19:53.635 "name": "BaseBdev1", 00:19:53.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.635 "is_configured": false, 00:19:53.635 "data_offset": 0, 00:19:53.635 "data_size": 0 00:19:53.635 }, 00:19:53.635 { 00:19:53.635 "name": "BaseBdev2", 00:19:53.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.635 "is_configured": false, 00:19:53.635 "data_offset": 0, 00:19:53.635 "data_size": 0 00:19:53.635 }, 00:19:53.635 { 00:19:53.635 "name": "BaseBdev3", 00:19:53.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.635 "is_configured": false, 00:19:53.635 "data_offset": 0, 00:19:53.635 "data_size": 0 00:19:53.635 }, 00:19:53.635 { 00:19:53.635 "name": "BaseBdev4", 00:19:53.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.635 "is_configured": false, 00:19:53.635 "data_offset": 0, 00:19:53.635 "data_size": 0 00:19:53.635 } 00:19:53.635 ] 00:19:53.635 }' 00:19:53.635 13:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:53.635 13:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.203 
13:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:54.203 [2024-07-26 13:19:34.716062] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:54.203 [2024-07-26 13:19:34.716094] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xde1f60 name Existed_Raid, state configuring 00:19:54.462 13:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:54.462 [2024-07-26 13:19:34.944685] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:54.462 [2024-07-26 13:19:34.944715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:54.462 [2024-07-26 13:19:34.944725] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:54.462 [2024-07-26 13:19:34.944736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:54.462 [2024-07-26 13:19:34.944743] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:54.462 [2024-07-26 13:19:34.944754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:54.462 [2024-07-26 13:19:34.944762] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:54.462 [2024-07-26 13:19:34.944772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:54.462 13:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b 
BaseBdev1 00:19:54.721 [2024-07-26 13:19:35.174702] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:54.721 BaseBdev1 00:19:54.721 13:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:54.721 13:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:54.721 13:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:54.721 13:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:54.721 13:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:54.721 13:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:54.721 13:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:54.980 13:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:55.238 [ 00:19:55.238 { 00:19:55.238 "name": "BaseBdev1", 00:19:55.238 "aliases": [ 00:19:55.239 "aeeeae0a-3a19-4f01-823d-1be82ac2546d" 00:19:55.239 ], 00:19:55.239 "product_name": "Malloc disk", 00:19:55.239 "block_size": 512, 00:19:55.239 "num_blocks": 65536, 00:19:55.239 "uuid": "aeeeae0a-3a19-4f01-823d-1be82ac2546d", 00:19:55.239 "assigned_rate_limits": { 00:19:55.239 "rw_ios_per_sec": 0, 00:19:55.239 "rw_mbytes_per_sec": 0, 00:19:55.239 "r_mbytes_per_sec": 0, 00:19:55.239 "w_mbytes_per_sec": 0 00:19:55.239 }, 00:19:55.239 "claimed": true, 00:19:55.239 "claim_type": "exclusive_write", 00:19:55.239 "zoned": false, 00:19:55.239 "supported_io_types": { 00:19:55.239 "read": true, 00:19:55.239 "write": 
true, 00:19:55.239 "unmap": true, 00:19:55.239 "flush": true, 00:19:55.239 "reset": true, 00:19:55.239 "nvme_admin": false, 00:19:55.239 "nvme_io": false, 00:19:55.239 "nvme_io_md": false, 00:19:55.239 "write_zeroes": true, 00:19:55.239 "zcopy": true, 00:19:55.239 "get_zone_info": false, 00:19:55.239 "zone_management": false, 00:19:55.239 "zone_append": false, 00:19:55.239 "compare": false, 00:19:55.239 "compare_and_write": false, 00:19:55.239 "abort": true, 00:19:55.239 "seek_hole": false, 00:19:55.239 "seek_data": false, 00:19:55.239 "copy": true, 00:19:55.239 "nvme_iov_md": false 00:19:55.239 }, 00:19:55.239 "memory_domains": [ 00:19:55.239 { 00:19:55.239 "dma_device_id": "system", 00:19:55.239 "dma_device_type": 1 00:19:55.239 }, 00:19:55.239 { 00:19:55.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.239 "dma_device_type": 2 00:19:55.239 } 00:19:55.239 ], 00:19:55.239 "driver_specific": {} 00:19:55.239 } 00:19:55.239 ] 00:19:55.239 13:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:55.239 13:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:55.239 13:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:55.239 13:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:55.239 13:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:55.239 13:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:55.239 13:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:55.239 13:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:55.239 13:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:19:55.239 13:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:55.239 13:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:55.239 13:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.239 13:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:55.498 13:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:55.498 "name": "Existed_Raid", 00:19:55.498 "uuid": "71fff421-f32f-41d1-9781-550e6d1ac2dc", 00:19:55.498 "strip_size_kb": 64, 00:19:55.498 "state": "configuring", 00:19:55.498 "raid_level": "concat", 00:19:55.498 "superblock": true, 00:19:55.498 "num_base_bdevs": 4, 00:19:55.498 "num_base_bdevs_discovered": 1, 00:19:55.498 "num_base_bdevs_operational": 4, 00:19:55.498 "base_bdevs_list": [ 00:19:55.498 { 00:19:55.498 "name": "BaseBdev1", 00:19:55.498 "uuid": "aeeeae0a-3a19-4f01-823d-1be82ac2546d", 00:19:55.498 "is_configured": true, 00:19:55.498 "data_offset": 2048, 00:19:55.498 "data_size": 63488 00:19:55.498 }, 00:19:55.498 { 00:19:55.498 "name": "BaseBdev2", 00:19:55.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.498 "is_configured": false, 00:19:55.498 "data_offset": 0, 00:19:55.498 "data_size": 0 00:19:55.498 }, 00:19:55.498 { 00:19:55.498 "name": "BaseBdev3", 00:19:55.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.498 "is_configured": false, 00:19:55.498 "data_offset": 0, 00:19:55.498 "data_size": 0 00:19:55.498 }, 00:19:55.498 { 00:19:55.498 "name": "BaseBdev4", 00:19:55.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.498 "is_configured": false, 00:19:55.498 "data_offset": 0, 00:19:55.498 "data_size": 0 00:19:55.498 } 00:19:55.498 ] 
00:19:55.498 }' 00:19:55.498 13:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:55.498 13:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.066 13:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:56.324 [2024-07-26 13:19:36.638554] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:56.324 [2024-07-26 13:19:36.638590] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xde17d0 name Existed_Raid, state configuring 00:19:56.324 13:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:56.584 [2024-07-26 13:19:36.867203] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:56.584 [2024-07-26 13:19:36.868579] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:56.584 [2024-07-26 13:19:36.868611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:56.584 [2024-07-26 13:19:36.868621] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:56.584 [2024-07-26 13:19:36.868632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:56.584 [2024-07-26 13:19:36.868640] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:56.584 [2024-07-26 13:19:36.868651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:56.584 13:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 
00:19:56.584 13:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:56.584 13:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:56.584 13:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:56.584 13:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:56.584 13:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:56.584 13:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:56.584 13:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:56.584 13:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:56.584 13:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:56.584 13:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:56.584 13:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:56.584 13:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.584 13:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:56.584 13:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:56.584 "name": "Existed_Raid", 00:19:56.584 "uuid": "e47f296f-e76c-4357-b809-f654aadf2ddc", 00:19:56.584 "strip_size_kb": 64, 00:19:56.584 "state": "configuring", 00:19:56.584 "raid_level": "concat", 00:19:56.584 "superblock": true, 
00:19:56.584 "num_base_bdevs": 4, 00:19:56.584 "num_base_bdevs_discovered": 1, 00:19:56.584 "num_base_bdevs_operational": 4, 00:19:56.584 "base_bdevs_list": [ 00:19:56.584 { 00:19:56.584 "name": "BaseBdev1", 00:19:56.584 "uuid": "aeeeae0a-3a19-4f01-823d-1be82ac2546d", 00:19:56.584 "is_configured": true, 00:19:56.584 "data_offset": 2048, 00:19:56.584 "data_size": 63488 00:19:56.584 }, 00:19:56.584 { 00:19:56.584 "name": "BaseBdev2", 00:19:56.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.584 "is_configured": false, 00:19:56.584 "data_offset": 0, 00:19:56.584 "data_size": 0 00:19:56.584 }, 00:19:56.584 { 00:19:56.584 "name": "BaseBdev3", 00:19:56.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.584 "is_configured": false, 00:19:56.584 "data_offset": 0, 00:19:56.584 "data_size": 0 00:19:56.584 }, 00:19:56.584 { 00:19:56.584 "name": "BaseBdev4", 00:19:56.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.584 "is_configured": false, 00:19:56.584 "data_offset": 0, 00:19:56.584 "data_size": 0 00:19:56.584 } 00:19:56.584 ] 00:19:56.584 }' 00:19:56.584 13:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:56.584 13:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.150 13:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:57.407 [2024-07-26 13:19:37.788707] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:57.407 BaseBdev2 00:19:57.407 13:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:57.407 13:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:57.407 13:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:19:57.407 13:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:57.407 13:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:57.407 13:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:57.407 13:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:57.665 13:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:57.665 [ 00:19:57.665 { 00:19:57.665 "name": "BaseBdev2", 00:19:57.665 "aliases": [ 00:19:57.665 "40ab53e9-0851-4707-a052-5bd85087b5a4" 00:19:57.665 ], 00:19:57.665 "product_name": "Malloc disk", 00:19:57.665 "block_size": 512, 00:19:57.665 "num_blocks": 65536, 00:19:57.665 "uuid": "40ab53e9-0851-4707-a052-5bd85087b5a4", 00:19:57.665 "assigned_rate_limits": { 00:19:57.665 "rw_ios_per_sec": 0, 00:19:57.665 "rw_mbytes_per_sec": 0, 00:19:57.665 "r_mbytes_per_sec": 0, 00:19:57.665 "w_mbytes_per_sec": 0 00:19:57.665 }, 00:19:57.665 "claimed": true, 00:19:57.665 "claim_type": "exclusive_write", 00:19:57.665 "zoned": false, 00:19:57.665 "supported_io_types": { 00:19:57.665 "read": true, 00:19:57.665 "write": true, 00:19:57.665 "unmap": true, 00:19:57.665 "flush": true, 00:19:57.665 "reset": true, 00:19:57.665 "nvme_admin": false, 00:19:57.665 "nvme_io": false, 00:19:57.665 "nvme_io_md": false, 00:19:57.665 "write_zeroes": true, 00:19:57.665 "zcopy": true, 00:19:57.665 "get_zone_info": false, 00:19:57.665 "zone_management": false, 00:19:57.665 "zone_append": false, 00:19:57.665 "compare": false, 00:19:57.665 "compare_and_write": false, 00:19:57.665 "abort": true, 00:19:57.665 "seek_hole": false, 
00:19:57.665 "seek_data": false, 00:19:57.665 "copy": true, 00:19:57.665 "nvme_iov_md": false 00:19:57.665 }, 00:19:57.665 "memory_domains": [ 00:19:57.665 { 00:19:57.665 "dma_device_id": "system", 00:19:57.665 "dma_device_type": 1 00:19:57.665 }, 00:19:57.665 { 00:19:57.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:57.665 "dma_device_type": 2 00:19:57.665 } 00:19:57.665 ], 00:19:57.665 "driver_specific": {} 00:19:57.665 } 00:19:57.665 ] 00:19:57.665 13:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:57.665 13:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:57.665 13:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:57.665 13:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:57.665 13:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:57.666 13:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:57.666 13:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:57.666 13:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:57.666 13:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:57.666 13:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:57.666 13:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:57.666 13:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:57.666 13:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:57.666 13:19:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:57.666 13:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.924 13:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:57.924 "name": "Existed_Raid", 00:19:57.924 "uuid": "e47f296f-e76c-4357-b809-f654aadf2ddc", 00:19:57.924 "strip_size_kb": 64, 00:19:57.924 "state": "configuring", 00:19:57.924 "raid_level": "concat", 00:19:57.924 "superblock": true, 00:19:57.924 "num_base_bdevs": 4, 00:19:57.924 "num_base_bdevs_discovered": 2, 00:19:57.924 "num_base_bdevs_operational": 4, 00:19:57.924 "base_bdevs_list": [ 00:19:57.924 { 00:19:57.924 "name": "BaseBdev1", 00:19:57.924 "uuid": "aeeeae0a-3a19-4f01-823d-1be82ac2546d", 00:19:57.924 "is_configured": true, 00:19:57.924 "data_offset": 2048, 00:19:57.924 "data_size": 63488 00:19:57.924 }, 00:19:57.924 { 00:19:57.924 "name": "BaseBdev2", 00:19:57.924 "uuid": "40ab53e9-0851-4707-a052-5bd85087b5a4", 00:19:57.924 "is_configured": true, 00:19:57.924 "data_offset": 2048, 00:19:57.924 "data_size": 63488 00:19:57.924 }, 00:19:57.924 { 00:19:57.924 "name": "BaseBdev3", 00:19:57.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.924 "is_configured": false, 00:19:57.924 "data_offset": 0, 00:19:57.924 "data_size": 0 00:19:57.924 }, 00:19:57.924 { 00:19:57.924 "name": "BaseBdev4", 00:19:57.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.924 "is_configured": false, 00:19:57.924 "data_offset": 0, 00:19:57.924 "data_size": 0 00:19:57.924 } 00:19:57.924 ] 00:19:57.924 }' 00:19:57.924 13:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:57.924 13:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.491 13:19:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:58.749 [2024-07-26 13:19:39.119385] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:58.749 BaseBdev3 00:19:58.749 13:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:58.749 13:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:58.749 13:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:58.749 13:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:58.749 13:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:58.749 13:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:58.749 13:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:59.007 13:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:59.265 [ 00:19:59.265 { 00:19:59.265 "name": "BaseBdev3", 00:19:59.265 "aliases": [ 00:19:59.265 "a3a8e9d0-21b8-4aba-a439-b57fa8fae73f" 00:19:59.266 ], 00:19:59.266 "product_name": "Malloc disk", 00:19:59.266 "block_size": 512, 00:19:59.266 "num_blocks": 65536, 00:19:59.266 "uuid": "a3a8e9d0-21b8-4aba-a439-b57fa8fae73f", 00:19:59.266 "assigned_rate_limits": { 00:19:59.266 "rw_ios_per_sec": 0, 00:19:59.266 "rw_mbytes_per_sec": 0, 00:19:59.266 "r_mbytes_per_sec": 0, 00:19:59.266 "w_mbytes_per_sec": 0 00:19:59.266 }, 
00:19:59.266 "claimed": true, 00:19:59.266 "claim_type": "exclusive_write", 00:19:59.266 "zoned": false, 00:19:59.266 "supported_io_types": { 00:19:59.266 "read": true, 00:19:59.266 "write": true, 00:19:59.266 "unmap": true, 00:19:59.266 "flush": true, 00:19:59.266 "reset": true, 00:19:59.266 "nvme_admin": false, 00:19:59.266 "nvme_io": false, 00:19:59.266 "nvme_io_md": false, 00:19:59.266 "write_zeroes": true, 00:19:59.266 "zcopy": true, 00:19:59.266 "get_zone_info": false, 00:19:59.266 "zone_management": false, 00:19:59.266 "zone_append": false, 00:19:59.266 "compare": false, 00:19:59.266 "compare_and_write": false, 00:19:59.266 "abort": true, 00:19:59.266 "seek_hole": false, 00:19:59.266 "seek_data": false, 00:19:59.266 "copy": true, 00:19:59.266 "nvme_iov_md": false 00:19:59.266 }, 00:19:59.266 "memory_domains": [ 00:19:59.266 { 00:19:59.266 "dma_device_id": "system", 00:19:59.266 "dma_device_type": 1 00:19:59.266 }, 00:19:59.266 { 00:19:59.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:59.266 "dma_device_type": 2 00:19:59.266 } 00:19:59.266 ], 00:19:59.266 "driver_specific": {} 00:19:59.266 } 00:19:59.266 ] 00:19:59.266 13:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:59.266 13:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:59.266 13:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:59.266 13:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:59.266 13:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:59.266 13:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:59.266 13:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:59.266 13:19:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:59.266 13:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:19:59.266 13:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:59.266 13:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:59.266 13:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:59.266 13:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:59.266 13:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.266 13:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:59.525 13:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:59.525 "name": "Existed_Raid", 00:19:59.525 "uuid": "e47f296f-e76c-4357-b809-f654aadf2ddc", 00:19:59.525 "strip_size_kb": 64, 00:19:59.525 "state": "configuring", 00:19:59.525 "raid_level": "concat", 00:19:59.525 "superblock": true, 00:19:59.525 "num_base_bdevs": 4, 00:19:59.525 "num_base_bdevs_discovered": 3, 00:19:59.525 "num_base_bdevs_operational": 4, 00:19:59.525 "base_bdevs_list": [ 00:19:59.525 { 00:19:59.525 "name": "BaseBdev1", 00:19:59.525 "uuid": "aeeeae0a-3a19-4f01-823d-1be82ac2546d", 00:19:59.525 "is_configured": true, 00:19:59.525 "data_offset": 2048, 00:19:59.525 "data_size": 63488 00:19:59.525 }, 00:19:59.525 { 00:19:59.525 "name": "BaseBdev2", 00:19:59.525 "uuid": "40ab53e9-0851-4707-a052-5bd85087b5a4", 00:19:59.525 "is_configured": true, 00:19:59.525 "data_offset": 2048, 00:19:59.525 "data_size": 63488 00:19:59.525 }, 00:19:59.525 { 00:19:59.525 "name": 
"BaseBdev3", 00:19:59.525 "uuid": "a3a8e9d0-21b8-4aba-a439-b57fa8fae73f", 00:19:59.525 "is_configured": true, 00:19:59.525 "data_offset": 2048, 00:19:59.525 "data_size": 63488 00:19:59.525 }, 00:19:59.525 { 00:19:59.525 "name": "BaseBdev4", 00:19:59.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.525 "is_configured": false, 00:19:59.525 "data_offset": 0, 00:19:59.525 "data_size": 0 00:19:59.525 } 00:19:59.525 ] 00:19:59.525 }' 00:19:59.525 13:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:59.525 13:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.089 13:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:00.089 [2024-07-26 13:19:40.578476] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:00.089 [2024-07-26 13:19:40.578634] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xde2840 00:20:00.089 [2024-07-26 13:19:40.578647] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:00.089 [2024-07-26 13:19:40.578804] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xde2480 00:20:00.090 [2024-07-26 13:19:40.578919] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xde2840 00:20:00.090 [2024-07-26 13:19:40.578928] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xde2840 00:20:00.090 [2024-07-26 13:19:40.579011] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.090 BaseBdev4 00:20:00.090 13:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:20:00.090 13:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev4 00:20:00.090 13:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:00.090 13:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:00.090 13:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:00.090 13:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:00.090 13:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:00.347 13:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:00.606 [ 00:20:00.606 { 00:20:00.606 "name": "BaseBdev4", 00:20:00.606 "aliases": [ 00:20:00.606 "2de4c5ff-c93c-40b3-8c29-1e1e39883baf" 00:20:00.606 ], 00:20:00.606 "product_name": "Malloc disk", 00:20:00.606 "block_size": 512, 00:20:00.606 "num_blocks": 65536, 00:20:00.606 "uuid": "2de4c5ff-c93c-40b3-8c29-1e1e39883baf", 00:20:00.606 "assigned_rate_limits": { 00:20:00.606 "rw_ios_per_sec": 0, 00:20:00.606 "rw_mbytes_per_sec": 0, 00:20:00.606 "r_mbytes_per_sec": 0, 00:20:00.606 "w_mbytes_per_sec": 0 00:20:00.606 }, 00:20:00.606 "claimed": true, 00:20:00.606 "claim_type": "exclusive_write", 00:20:00.606 "zoned": false, 00:20:00.606 "supported_io_types": { 00:20:00.606 "read": true, 00:20:00.606 "write": true, 00:20:00.606 "unmap": true, 00:20:00.606 "flush": true, 00:20:00.606 "reset": true, 00:20:00.606 "nvme_admin": false, 00:20:00.606 "nvme_io": false, 00:20:00.606 "nvme_io_md": false, 00:20:00.606 "write_zeroes": true, 00:20:00.606 "zcopy": true, 00:20:00.606 "get_zone_info": false, 00:20:00.606 "zone_management": false, 00:20:00.606 "zone_append": false, 00:20:00.606 
"compare": false, 00:20:00.606 "compare_and_write": false, 00:20:00.606 "abort": true, 00:20:00.606 "seek_hole": false, 00:20:00.606 "seek_data": false, 00:20:00.606 "copy": true, 00:20:00.606 "nvme_iov_md": false 00:20:00.606 }, 00:20:00.606 "memory_domains": [ 00:20:00.606 { 00:20:00.606 "dma_device_id": "system", 00:20:00.606 "dma_device_type": 1 00:20:00.606 }, 00:20:00.606 { 00:20:00.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.606 "dma_device_type": 2 00:20:00.606 } 00:20:00.606 ], 00:20:00.606 "driver_specific": {} 00:20:00.606 } 00:20:00.606 ] 00:20:00.606 13:19:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:00.606 13:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:00.606 13:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:00.606 13:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:20:00.606 13:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:00.606 13:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:00.606 13:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:00.606 13:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:00.606 13:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:00.606 13:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:00.606 13:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:00.606 13:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:00.606 13:19:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:00.606 13:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.606 13:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:00.864 13:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:00.864 "name": "Existed_Raid", 00:20:00.864 "uuid": "e47f296f-e76c-4357-b809-f654aadf2ddc", 00:20:00.864 "strip_size_kb": 64, 00:20:00.864 "state": "online", 00:20:00.864 "raid_level": "concat", 00:20:00.864 "superblock": true, 00:20:00.864 "num_base_bdevs": 4, 00:20:00.864 "num_base_bdevs_discovered": 4, 00:20:00.864 "num_base_bdevs_operational": 4, 00:20:00.864 "base_bdevs_list": [ 00:20:00.864 { 00:20:00.865 "name": "BaseBdev1", 00:20:00.865 "uuid": "aeeeae0a-3a19-4f01-823d-1be82ac2546d", 00:20:00.865 "is_configured": true, 00:20:00.865 "data_offset": 2048, 00:20:00.865 "data_size": 63488 00:20:00.865 }, 00:20:00.865 { 00:20:00.865 "name": "BaseBdev2", 00:20:00.865 "uuid": "40ab53e9-0851-4707-a052-5bd85087b5a4", 00:20:00.865 "is_configured": true, 00:20:00.865 "data_offset": 2048, 00:20:00.865 "data_size": 63488 00:20:00.865 }, 00:20:00.865 { 00:20:00.865 "name": "BaseBdev3", 00:20:00.865 "uuid": "a3a8e9d0-21b8-4aba-a439-b57fa8fae73f", 00:20:00.865 "is_configured": true, 00:20:00.865 "data_offset": 2048, 00:20:00.865 "data_size": 63488 00:20:00.865 }, 00:20:00.865 { 00:20:00.865 "name": "BaseBdev4", 00:20:00.865 "uuid": "2de4c5ff-c93c-40b3-8c29-1e1e39883baf", 00:20:00.865 "is_configured": true, 00:20:00.865 "data_offset": 2048, 00:20:00.865 "data_size": 63488 00:20:00.865 } 00:20:00.865 ] 00:20:00.865 }' 00:20:00.865 13:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:00.865 13:19:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:01.431 13:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:01.431 13:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:01.431 13:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:01.431 13:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:01.431 13:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:01.431 13:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:01.431 13:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:01.431 13:19:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:01.690 [2024-07-26 13:19:42.062698] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:01.690 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:01.690 "name": "Existed_Raid", 00:20:01.690 "aliases": [ 00:20:01.690 "e47f296f-e76c-4357-b809-f654aadf2ddc" 00:20:01.690 ], 00:20:01.690 "product_name": "Raid Volume", 00:20:01.690 "block_size": 512, 00:20:01.690 "num_blocks": 253952, 00:20:01.690 "uuid": "e47f296f-e76c-4357-b809-f654aadf2ddc", 00:20:01.690 "assigned_rate_limits": { 00:20:01.690 "rw_ios_per_sec": 0, 00:20:01.690 "rw_mbytes_per_sec": 0, 00:20:01.690 "r_mbytes_per_sec": 0, 00:20:01.690 "w_mbytes_per_sec": 0 00:20:01.690 }, 00:20:01.690 "claimed": false, 00:20:01.690 "zoned": false, 00:20:01.690 "supported_io_types": { 00:20:01.690 "read": true, 00:20:01.690 "write": true, 00:20:01.690 "unmap": true, 
00:20:01.690 "flush": true, 00:20:01.690 "reset": true, 00:20:01.690 "nvme_admin": false, 00:20:01.690 "nvme_io": false, 00:20:01.690 "nvme_io_md": false, 00:20:01.690 "write_zeroes": true, 00:20:01.690 "zcopy": false, 00:20:01.690 "get_zone_info": false, 00:20:01.690 "zone_management": false, 00:20:01.690 "zone_append": false, 00:20:01.690 "compare": false, 00:20:01.690 "compare_and_write": false, 00:20:01.690 "abort": false, 00:20:01.690 "seek_hole": false, 00:20:01.690 "seek_data": false, 00:20:01.690 "copy": false, 00:20:01.690 "nvme_iov_md": false 00:20:01.690 }, 00:20:01.690 "memory_domains": [ 00:20:01.690 { 00:20:01.690 "dma_device_id": "system", 00:20:01.690 "dma_device_type": 1 00:20:01.690 }, 00:20:01.690 { 00:20:01.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.690 "dma_device_type": 2 00:20:01.690 }, 00:20:01.690 { 00:20:01.690 "dma_device_id": "system", 00:20:01.690 "dma_device_type": 1 00:20:01.690 }, 00:20:01.690 { 00:20:01.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.690 "dma_device_type": 2 00:20:01.690 }, 00:20:01.690 { 00:20:01.690 "dma_device_id": "system", 00:20:01.690 "dma_device_type": 1 00:20:01.690 }, 00:20:01.690 { 00:20:01.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.690 "dma_device_type": 2 00:20:01.690 }, 00:20:01.690 { 00:20:01.690 "dma_device_id": "system", 00:20:01.690 "dma_device_type": 1 00:20:01.690 }, 00:20:01.690 { 00:20:01.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.690 "dma_device_type": 2 00:20:01.690 } 00:20:01.690 ], 00:20:01.690 "driver_specific": { 00:20:01.690 "raid": { 00:20:01.690 "uuid": "e47f296f-e76c-4357-b809-f654aadf2ddc", 00:20:01.690 "strip_size_kb": 64, 00:20:01.690 "state": "online", 00:20:01.690 "raid_level": "concat", 00:20:01.690 "superblock": true, 00:20:01.690 "num_base_bdevs": 4, 00:20:01.690 "num_base_bdevs_discovered": 4, 00:20:01.690 "num_base_bdevs_operational": 4, 00:20:01.690 "base_bdevs_list": [ 00:20:01.690 { 00:20:01.690 "name": "BaseBdev1", 00:20:01.690 
"uuid": "aeeeae0a-3a19-4f01-823d-1be82ac2546d", 00:20:01.690 "is_configured": true, 00:20:01.690 "data_offset": 2048, 00:20:01.690 "data_size": 63488 00:20:01.690 }, 00:20:01.690 { 00:20:01.690 "name": "BaseBdev2", 00:20:01.690 "uuid": "40ab53e9-0851-4707-a052-5bd85087b5a4", 00:20:01.690 "is_configured": true, 00:20:01.690 "data_offset": 2048, 00:20:01.690 "data_size": 63488 00:20:01.690 }, 00:20:01.690 { 00:20:01.690 "name": "BaseBdev3", 00:20:01.690 "uuid": "a3a8e9d0-21b8-4aba-a439-b57fa8fae73f", 00:20:01.690 "is_configured": true, 00:20:01.690 "data_offset": 2048, 00:20:01.690 "data_size": 63488 00:20:01.690 }, 00:20:01.690 { 00:20:01.690 "name": "BaseBdev4", 00:20:01.690 "uuid": "2de4c5ff-c93c-40b3-8c29-1e1e39883baf", 00:20:01.690 "is_configured": true, 00:20:01.690 "data_offset": 2048, 00:20:01.690 "data_size": 63488 00:20:01.690 } 00:20:01.690 ] 00:20:01.690 } 00:20:01.690 } 00:20:01.690 }' 00:20:01.690 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:01.690 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:01.690 BaseBdev2 00:20:01.690 BaseBdev3 00:20:01.690 BaseBdev4' 00:20:01.690 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:01.690 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:01.690 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:01.949 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:01.949 "name": "BaseBdev1", 00:20:01.949 "aliases": [ 00:20:01.949 "aeeeae0a-3a19-4f01-823d-1be82ac2546d" 00:20:01.949 ], 00:20:01.949 "product_name": "Malloc disk", 00:20:01.949 
"block_size": 512, 00:20:01.949 "num_blocks": 65536, 00:20:01.950 "uuid": "aeeeae0a-3a19-4f01-823d-1be82ac2546d", 00:20:01.950 "assigned_rate_limits": { 00:20:01.950 "rw_ios_per_sec": 0, 00:20:01.950 "rw_mbytes_per_sec": 0, 00:20:01.950 "r_mbytes_per_sec": 0, 00:20:01.950 "w_mbytes_per_sec": 0 00:20:01.950 }, 00:20:01.950 "claimed": true, 00:20:01.950 "claim_type": "exclusive_write", 00:20:01.950 "zoned": false, 00:20:01.950 "supported_io_types": { 00:20:01.950 "read": true, 00:20:01.950 "write": true, 00:20:01.950 "unmap": true, 00:20:01.950 "flush": true, 00:20:01.950 "reset": true, 00:20:01.950 "nvme_admin": false, 00:20:01.950 "nvme_io": false, 00:20:01.950 "nvme_io_md": false, 00:20:01.950 "write_zeroes": true, 00:20:01.950 "zcopy": true, 00:20:01.950 "get_zone_info": false, 00:20:01.950 "zone_management": false, 00:20:01.950 "zone_append": false, 00:20:01.950 "compare": false, 00:20:01.950 "compare_and_write": false, 00:20:01.950 "abort": true, 00:20:01.950 "seek_hole": false, 00:20:01.950 "seek_data": false, 00:20:01.950 "copy": true, 00:20:01.950 "nvme_iov_md": false 00:20:01.950 }, 00:20:01.950 "memory_domains": [ 00:20:01.950 { 00:20:01.950 "dma_device_id": "system", 00:20:01.950 "dma_device_type": 1 00:20:01.950 }, 00:20:01.950 { 00:20:01.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.950 "dma_device_type": 2 00:20:01.950 } 00:20:01.950 ], 00:20:01.950 "driver_specific": {} 00:20:01.950 }' 00:20:01.950 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:01.950 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:01.950 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:01.950 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:02.209 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:02.209 13:19:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:02.209 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:02.209 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:02.209 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:02.209 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:02.209 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:02.209 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:02.209 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:02.209 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:02.209 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:02.468 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:02.468 "name": "BaseBdev2", 00:20:02.468 "aliases": [ 00:20:02.468 "40ab53e9-0851-4707-a052-5bd85087b5a4" 00:20:02.468 ], 00:20:02.468 "product_name": "Malloc disk", 00:20:02.468 "block_size": 512, 00:20:02.468 "num_blocks": 65536, 00:20:02.468 "uuid": "40ab53e9-0851-4707-a052-5bd85087b5a4", 00:20:02.468 "assigned_rate_limits": { 00:20:02.468 "rw_ios_per_sec": 0, 00:20:02.468 "rw_mbytes_per_sec": 0, 00:20:02.468 "r_mbytes_per_sec": 0, 00:20:02.468 "w_mbytes_per_sec": 0 00:20:02.468 }, 00:20:02.468 "claimed": true, 00:20:02.468 "claim_type": "exclusive_write", 00:20:02.468 "zoned": false, 00:20:02.468 "supported_io_types": { 00:20:02.468 "read": true, 00:20:02.468 "write": true, 00:20:02.468 "unmap": true, 00:20:02.468 
"flush": true, 00:20:02.468 "reset": true, 00:20:02.468 "nvme_admin": false, 00:20:02.468 "nvme_io": false, 00:20:02.468 "nvme_io_md": false, 00:20:02.468 "write_zeroes": true, 00:20:02.468 "zcopy": true, 00:20:02.468 "get_zone_info": false, 00:20:02.468 "zone_management": false, 00:20:02.468 "zone_append": false, 00:20:02.468 "compare": false, 00:20:02.468 "compare_and_write": false, 00:20:02.468 "abort": true, 00:20:02.468 "seek_hole": false, 00:20:02.468 "seek_data": false, 00:20:02.468 "copy": true, 00:20:02.468 "nvme_iov_md": false 00:20:02.468 }, 00:20:02.468 "memory_domains": [ 00:20:02.468 { 00:20:02.468 "dma_device_id": "system", 00:20:02.468 "dma_device_type": 1 00:20:02.468 }, 00:20:02.468 { 00:20:02.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.468 "dma_device_type": 2 00:20:02.468 } 00:20:02.468 ], 00:20:02.468 "driver_specific": {} 00:20:02.468 }' 00:20:02.468 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:02.468 13:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:02.726 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:02.726 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:02.726 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:02.726 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:02.726 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:02.727 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:02.727 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:02.727 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:02.727 13:19:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:02.983 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:02.983 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:02.983 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:02.983 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:02.983 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:02.983 "name": "BaseBdev3", 00:20:02.983 "aliases": [ 00:20:02.983 "a3a8e9d0-21b8-4aba-a439-b57fa8fae73f" 00:20:02.983 ], 00:20:02.983 "product_name": "Malloc disk", 00:20:02.983 "block_size": 512, 00:20:02.983 "num_blocks": 65536, 00:20:02.983 "uuid": "a3a8e9d0-21b8-4aba-a439-b57fa8fae73f", 00:20:02.983 "assigned_rate_limits": { 00:20:02.983 "rw_ios_per_sec": 0, 00:20:02.983 "rw_mbytes_per_sec": 0, 00:20:02.983 "r_mbytes_per_sec": 0, 00:20:02.983 "w_mbytes_per_sec": 0 00:20:02.983 }, 00:20:02.983 "claimed": true, 00:20:02.984 "claim_type": "exclusive_write", 00:20:02.984 "zoned": false, 00:20:02.984 "supported_io_types": { 00:20:02.984 "read": true, 00:20:02.984 "write": true, 00:20:02.984 "unmap": true, 00:20:02.984 "flush": true, 00:20:02.984 "reset": true, 00:20:02.984 "nvme_admin": false, 00:20:02.984 "nvme_io": false, 00:20:02.984 "nvme_io_md": false, 00:20:02.984 "write_zeroes": true, 00:20:02.984 "zcopy": true, 00:20:02.984 "get_zone_info": false, 00:20:02.984 "zone_management": false, 00:20:02.984 "zone_append": false, 00:20:02.984 "compare": false, 00:20:02.984 "compare_and_write": false, 00:20:02.984 "abort": true, 00:20:02.984 "seek_hole": false, 00:20:02.984 "seek_data": false, 00:20:02.984 "copy": true, 00:20:02.984 "nvme_iov_md": 
false 00:20:02.984 }, 00:20:02.984 "memory_domains": [ 00:20:02.984 { 00:20:02.984 "dma_device_id": "system", 00:20:02.984 "dma_device_type": 1 00:20:02.984 }, 00:20:02.984 { 00:20:02.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.984 "dma_device_type": 2 00:20:02.984 } 00:20:02.984 ], 00:20:02.984 "driver_specific": {} 00:20:02.984 }' 00:20:02.984 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:03.241 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:03.241 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:03.241 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.241 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.241 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:03.241 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:03.241 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:03.241 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:03.241 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:03.500 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:03.500 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:03.500 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:03.500 13:19:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:03.500 13:19:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:03.759 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:03.759 "name": "BaseBdev4", 00:20:03.759 "aliases": [ 00:20:03.759 "2de4c5ff-c93c-40b3-8c29-1e1e39883baf" 00:20:03.759 ], 00:20:03.759 "product_name": "Malloc disk", 00:20:03.759 "block_size": 512, 00:20:03.759 "num_blocks": 65536, 00:20:03.759 "uuid": "2de4c5ff-c93c-40b3-8c29-1e1e39883baf", 00:20:03.759 "assigned_rate_limits": { 00:20:03.759 "rw_ios_per_sec": 0, 00:20:03.759 "rw_mbytes_per_sec": 0, 00:20:03.759 "r_mbytes_per_sec": 0, 00:20:03.759 "w_mbytes_per_sec": 0 00:20:03.759 }, 00:20:03.759 "claimed": true, 00:20:03.759 "claim_type": "exclusive_write", 00:20:03.759 "zoned": false, 00:20:03.759 "supported_io_types": { 00:20:03.759 "read": true, 00:20:03.759 "write": true, 00:20:03.759 "unmap": true, 00:20:03.759 "flush": true, 00:20:03.759 "reset": true, 00:20:03.759 "nvme_admin": false, 00:20:03.759 "nvme_io": false, 00:20:03.759 "nvme_io_md": false, 00:20:03.759 "write_zeroes": true, 00:20:03.759 "zcopy": true, 00:20:03.759 "get_zone_info": false, 00:20:03.759 "zone_management": false, 00:20:03.759 "zone_append": false, 00:20:03.759 "compare": false, 00:20:03.759 "compare_and_write": false, 00:20:03.759 "abort": true, 00:20:03.759 "seek_hole": false, 00:20:03.759 "seek_data": false, 00:20:03.759 "copy": true, 00:20:03.759 "nvme_iov_md": false 00:20:03.759 }, 00:20:03.759 "memory_domains": [ 00:20:03.759 { 00:20:03.759 "dma_device_id": "system", 00:20:03.759 "dma_device_type": 1 00:20:03.759 }, 00:20:03.759 { 00:20:03.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.759 "dma_device_type": 2 00:20:03.759 } 00:20:03.759 ], 00:20:03.759 "driver_specific": {} 00:20:03.759 }' 00:20:03.759 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:03.759 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:20:03.759 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:03.759 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.759 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.759 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:03.759 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:04.019 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:04.019 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:04.019 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:04.019 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:04.019 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:04.019 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:04.279 [2024-07-26 13:19:44.637299] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:04.279 [2024-07-26 13:19:44.637322] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:04.279 [2024-07-26 13:19:44.637366] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:04.279 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:04.279 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:20:04.279 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:04.279 13:19:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:20:04.279 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:20:04.279 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:20:04.279 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:04.279 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:20:04.279 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:04.279 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:04.279 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:04.279 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:04.279 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:04.279 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:04.279 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:04.279 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.279 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.538 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:04.539 "name": "Existed_Raid", 00:20:04.539 "uuid": "e47f296f-e76c-4357-b809-f654aadf2ddc", 00:20:04.539 "strip_size_kb": 64, 00:20:04.539 "state": "offline", 00:20:04.539 
"raid_level": "concat", 00:20:04.539 "superblock": true, 00:20:04.539 "num_base_bdevs": 4, 00:20:04.539 "num_base_bdevs_discovered": 3, 00:20:04.539 "num_base_bdevs_operational": 3, 00:20:04.539 "base_bdevs_list": [ 00:20:04.539 { 00:20:04.539 "name": null, 00:20:04.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.539 "is_configured": false, 00:20:04.539 "data_offset": 2048, 00:20:04.539 "data_size": 63488 00:20:04.539 }, 00:20:04.539 { 00:20:04.539 "name": "BaseBdev2", 00:20:04.539 "uuid": "40ab53e9-0851-4707-a052-5bd85087b5a4", 00:20:04.539 "is_configured": true, 00:20:04.539 "data_offset": 2048, 00:20:04.539 "data_size": 63488 00:20:04.539 }, 00:20:04.539 { 00:20:04.539 "name": "BaseBdev3", 00:20:04.539 "uuid": "a3a8e9d0-21b8-4aba-a439-b57fa8fae73f", 00:20:04.539 "is_configured": true, 00:20:04.539 "data_offset": 2048, 00:20:04.539 "data_size": 63488 00:20:04.539 }, 00:20:04.539 { 00:20:04.539 "name": "BaseBdev4", 00:20:04.539 "uuid": "2de4c5ff-c93c-40b3-8c29-1e1e39883baf", 00:20:04.539 "is_configured": true, 00:20:04.539 "data_offset": 2048, 00:20:04.539 "data_size": 63488 00:20:04.539 } 00:20:04.539 ] 00:20:04.539 }' 00:20:04.539 13:19:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:04.539 13:19:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:05.107 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:05.107 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:05.107 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.107 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:05.365 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
raid_bdev=Existed_Raid 00:20:05.365 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:05.365 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:05.365 [2024-07-26 13:19:45.889854] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:05.626 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:05.626 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:05.626 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.626 13:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:05.928 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:05.928 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:05.928 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:05.928 [2024-07-26 13:19:46.361102] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:05.928 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:05.928 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:05.928 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.928 13:19:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:06.187 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:06.187 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:06.187 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:06.447 [2024-07-26 13:19:46.816304] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:06.447 [2024-07-26 13:19:46.816340] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xde2840 name Existed_Raid, state offline 00:20:06.447 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:06.447 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:06.447 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.447 13:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:06.705 13:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:06.706 13:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:06.706 13:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:20:06.706 13:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:06.706 13:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:06.706 13:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:06.965 BaseBdev2 00:20:06.965 13:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:06.965 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:06.965 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:06.965 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:06.965 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:06.965 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:06.965 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:07.224 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:07.224 [ 00:20:07.224 { 00:20:07.224 "name": "BaseBdev2", 00:20:07.224 "aliases": [ 00:20:07.224 "139b908c-f2f4-4afb-9766-e2079dc16f18" 00:20:07.224 ], 00:20:07.224 "product_name": "Malloc disk", 00:20:07.224 "block_size": 512, 00:20:07.224 "num_blocks": 65536, 00:20:07.224 "uuid": "139b908c-f2f4-4afb-9766-e2079dc16f18", 00:20:07.224 "assigned_rate_limits": { 00:20:07.224 "rw_ios_per_sec": 0, 00:20:07.224 "rw_mbytes_per_sec": 0, 00:20:07.224 "r_mbytes_per_sec": 0, 00:20:07.224 "w_mbytes_per_sec": 0 00:20:07.224 }, 00:20:07.224 "claimed": false, 00:20:07.224 "zoned": false, 00:20:07.224 "supported_io_types": { 00:20:07.224 "read": true, 00:20:07.224 "write": true, 00:20:07.224 "unmap": true, 00:20:07.224 "flush": 
true, 00:20:07.224 "reset": true, 00:20:07.224 "nvme_admin": false, 00:20:07.224 "nvme_io": false, 00:20:07.224 "nvme_io_md": false, 00:20:07.224 "write_zeroes": true, 00:20:07.224 "zcopy": true, 00:20:07.224 "get_zone_info": false, 00:20:07.224 "zone_management": false, 00:20:07.224 "zone_append": false, 00:20:07.224 "compare": false, 00:20:07.224 "compare_and_write": false, 00:20:07.224 "abort": true, 00:20:07.224 "seek_hole": false, 00:20:07.224 "seek_data": false, 00:20:07.224 "copy": true, 00:20:07.224 "nvme_iov_md": false 00:20:07.224 }, 00:20:07.224 "memory_domains": [ 00:20:07.224 { 00:20:07.224 "dma_device_id": "system", 00:20:07.224 "dma_device_type": 1 00:20:07.224 }, 00:20:07.224 { 00:20:07.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.224 "dma_device_type": 2 00:20:07.224 } 00:20:07.224 ], 00:20:07.224 "driver_specific": {} 00:20:07.224 } 00:20:07.224 ] 00:20:07.224 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:07.224 13:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:07.224 13:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:07.224 13:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:07.483 BaseBdev3 00:20:07.483 13:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:07.483 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:20:07.483 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:07.483 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:07.483 13:19:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:07.483 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:07.483 13:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:07.742 13:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:08.001 [ 00:20:08.001 { 00:20:08.001 "name": "BaseBdev3", 00:20:08.001 "aliases": [ 00:20:08.001 "0ccd9414-9d24-4361-a872-b08aedaba33b" 00:20:08.001 ], 00:20:08.001 "product_name": "Malloc disk", 00:20:08.001 "block_size": 512, 00:20:08.001 "num_blocks": 65536, 00:20:08.001 "uuid": "0ccd9414-9d24-4361-a872-b08aedaba33b", 00:20:08.001 "assigned_rate_limits": { 00:20:08.001 "rw_ios_per_sec": 0, 00:20:08.001 "rw_mbytes_per_sec": 0, 00:20:08.001 "r_mbytes_per_sec": 0, 00:20:08.001 "w_mbytes_per_sec": 0 00:20:08.001 }, 00:20:08.001 "claimed": false, 00:20:08.001 "zoned": false, 00:20:08.001 "supported_io_types": { 00:20:08.001 "read": true, 00:20:08.001 "write": true, 00:20:08.001 "unmap": true, 00:20:08.001 "flush": true, 00:20:08.001 "reset": true, 00:20:08.001 "nvme_admin": false, 00:20:08.001 "nvme_io": false, 00:20:08.001 "nvme_io_md": false, 00:20:08.001 "write_zeroes": true, 00:20:08.001 "zcopy": true, 00:20:08.001 "get_zone_info": false, 00:20:08.001 "zone_management": false, 00:20:08.001 "zone_append": false, 00:20:08.001 "compare": false, 00:20:08.001 "compare_and_write": false, 00:20:08.001 "abort": true, 00:20:08.001 "seek_hole": false, 00:20:08.001 "seek_data": false, 00:20:08.001 "copy": true, 00:20:08.001 "nvme_iov_md": false 00:20:08.001 }, 00:20:08.001 "memory_domains": [ 00:20:08.001 { 00:20:08.001 "dma_device_id": "system", 00:20:08.001 "dma_device_type": 1 
00:20:08.001 }, 00:20:08.001 { 00:20:08.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.001 "dma_device_type": 2 00:20:08.001 } 00:20:08.001 ], 00:20:08.001 "driver_specific": {} 00:20:08.001 } 00:20:08.001 ] 00:20:08.001 13:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:08.001 13:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:08.001 13:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:08.001 13:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:08.260 BaseBdev4 00:20:08.260 13:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:20:08.260 13:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:20:08.260 13:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:08.260 13:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:08.260 13:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:08.260 13:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:08.260 13:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:08.519 13:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:08.778 [ 00:20:08.778 { 00:20:08.778 "name": "BaseBdev4", 00:20:08.778 "aliases": [ 
00:20:08.778 "214ed7f5-b5ab-460f-a950-5d14ea06bfe1" 00:20:08.778 ], 00:20:08.778 "product_name": "Malloc disk", 00:20:08.778 "block_size": 512, 00:20:08.778 "num_blocks": 65536, 00:20:08.778 "uuid": "214ed7f5-b5ab-460f-a950-5d14ea06bfe1", 00:20:08.778 "assigned_rate_limits": { 00:20:08.778 "rw_ios_per_sec": 0, 00:20:08.778 "rw_mbytes_per_sec": 0, 00:20:08.778 "r_mbytes_per_sec": 0, 00:20:08.778 "w_mbytes_per_sec": 0 00:20:08.778 }, 00:20:08.778 "claimed": false, 00:20:08.778 "zoned": false, 00:20:08.778 "supported_io_types": { 00:20:08.778 "read": true, 00:20:08.778 "write": true, 00:20:08.778 "unmap": true, 00:20:08.778 "flush": true, 00:20:08.778 "reset": true, 00:20:08.778 "nvme_admin": false, 00:20:08.778 "nvme_io": false, 00:20:08.778 "nvme_io_md": false, 00:20:08.778 "write_zeroes": true, 00:20:08.778 "zcopy": true, 00:20:08.778 "get_zone_info": false, 00:20:08.778 "zone_management": false, 00:20:08.778 "zone_append": false, 00:20:08.778 "compare": false, 00:20:08.778 "compare_and_write": false, 00:20:08.778 "abort": true, 00:20:08.778 "seek_hole": false, 00:20:08.778 "seek_data": false, 00:20:08.778 "copy": true, 00:20:08.778 "nvme_iov_md": false 00:20:08.778 }, 00:20:08.778 "memory_domains": [ 00:20:08.778 { 00:20:08.778 "dma_device_id": "system", 00:20:08.778 "dma_device_type": 1 00:20:08.778 }, 00:20:08.778 { 00:20:08.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.778 "dma_device_type": 2 00:20:08.778 } 00:20:08.778 ], 00:20:08.778 "driver_specific": {} 00:20:08.778 } 00:20:08.778 ] 00:20:08.778 13:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:08.778 13:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:08.778 13:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:08.778 13:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:08.778 [2024-07-26 13:19:49.301325] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:08.778 [2024-07-26 13:19:49.301363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:08.778 [2024-07-26 13:19:49.301379] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:08.778 [2024-07-26 13:19:49.302602] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:08.778 [2024-07-26 13:19:49.302641] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:09.037 13:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:09.037 13:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:09.037 13:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:09.037 13:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:09.037 13:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:09.037 13:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:09.037 13:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:09.037 13:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:09.037 13:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:09.037 13:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 
-- # local tmp 00:20:09.037 13:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.037 13:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:09.037 13:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:09.037 "name": "Existed_Raid", 00:20:09.037 "uuid": "30e7914b-14c7-4bca-a34a-7e79712fcd2c", 00:20:09.037 "strip_size_kb": 64, 00:20:09.037 "state": "configuring", 00:20:09.037 "raid_level": "concat", 00:20:09.037 "superblock": true, 00:20:09.037 "num_base_bdevs": 4, 00:20:09.037 "num_base_bdevs_discovered": 3, 00:20:09.037 "num_base_bdevs_operational": 4, 00:20:09.037 "base_bdevs_list": [ 00:20:09.037 { 00:20:09.037 "name": "BaseBdev1", 00:20:09.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.037 "is_configured": false, 00:20:09.037 "data_offset": 0, 00:20:09.037 "data_size": 0 00:20:09.037 }, 00:20:09.037 { 00:20:09.037 "name": "BaseBdev2", 00:20:09.037 "uuid": "139b908c-f2f4-4afb-9766-e2079dc16f18", 00:20:09.037 "is_configured": true, 00:20:09.037 "data_offset": 2048, 00:20:09.037 "data_size": 63488 00:20:09.037 }, 00:20:09.037 { 00:20:09.037 "name": "BaseBdev3", 00:20:09.037 "uuid": "0ccd9414-9d24-4361-a872-b08aedaba33b", 00:20:09.037 "is_configured": true, 00:20:09.037 "data_offset": 2048, 00:20:09.037 "data_size": 63488 00:20:09.037 }, 00:20:09.037 { 00:20:09.037 "name": "BaseBdev4", 00:20:09.037 "uuid": "214ed7f5-b5ab-460f-a950-5d14ea06bfe1", 00:20:09.037 "is_configured": true, 00:20:09.037 "data_offset": 2048, 00:20:09.037 "data_size": 63488 00:20:09.037 } 00:20:09.037 ] 00:20:09.037 }' 00:20:09.037 13:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:09.037 13:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:20:09.605 13:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:09.863 [2024-07-26 13:19:50.315970] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:09.863 13:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:09.863 13:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:09.863 13:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:09.863 13:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:09.863 13:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:09.863 13:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:09.863 13:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:09.863 13:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:09.863 13:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:09.863 13:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:09.863 13:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.863 13:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.122 13:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:10.122 "name": 
"Existed_Raid", 00:20:10.122 "uuid": "30e7914b-14c7-4bca-a34a-7e79712fcd2c", 00:20:10.122 "strip_size_kb": 64, 00:20:10.122 "state": "configuring", 00:20:10.122 "raid_level": "concat", 00:20:10.122 "superblock": true, 00:20:10.122 "num_base_bdevs": 4, 00:20:10.122 "num_base_bdevs_discovered": 2, 00:20:10.122 "num_base_bdevs_operational": 4, 00:20:10.122 "base_bdevs_list": [ 00:20:10.122 { 00:20:10.122 "name": "BaseBdev1", 00:20:10.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.122 "is_configured": false, 00:20:10.122 "data_offset": 0, 00:20:10.122 "data_size": 0 00:20:10.122 }, 00:20:10.122 { 00:20:10.122 "name": null, 00:20:10.122 "uuid": "139b908c-f2f4-4afb-9766-e2079dc16f18", 00:20:10.122 "is_configured": false, 00:20:10.122 "data_offset": 2048, 00:20:10.122 "data_size": 63488 00:20:10.122 }, 00:20:10.122 { 00:20:10.122 "name": "BaseBdev3", 00:20:10.122 "uuid": "0ccd9414-9d24-4361-a872-b08aedaba33b", 00:20:10.122 "is_configured": true, 00:20:10.123 "data_offset": 2048, 00:20:10.123 "data_size": 63488 00:20:10.123 }, 00:20:10.123 { 00:20:10.123 "name": "BaseBdev4", 00:20:10.123 "uuid": "214ed7f5-b5ab-460f-a950-5d14ea06bfe1", 00:20:10.123 "is_configured": true, 00:20:10.123 "data_offset": 2048, 00:20:10.123 "data_size": 63488 00:20:10.123 } 00:20:10.123 ] 00:20:10.123 }' 00:20:10.123 13:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:10.123 13:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.691 13:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:10.691 13:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.950 13:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:10.950 13:19:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:11.208 [2024-07-26 13:19:51.598557] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:11.208 BaseBdev1 00:20:11.208 13:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:11.208 13:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:11.208 13:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:11.208 13:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:11.209 13:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:11.209 13:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:11.209 13:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:11.467 13:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:11.727 [ 00:20:11.727 { 00:20:11.727 "name": "BaseBdev1", 00:20:11.727 "aliases": [ 00:20:11.727 "4fbed157-71ea-4e98-98c9-b6ad69f5f11d" 00:20:11.727 ], 00:20:11.727 "product_name": "Malloc disk", 00:20:11.727 "block_size": 512, 00:20:11.727 "num_blocks": 65536, 00:20:11.727 "uuid": "4fbed157-71ea-4e98-98c9-b6ad69f5f11d", 00:20:11.727 "assigned_rate_limits": { 00:20:11.727 "rw_ios_per_sec": 0, 00:20:11.727 "rw_mbytes_per_sec": 0, 00:20:11.727 "r_mbytes_per_sec": 0, 00:20:11.727 "w_mbytes_per_sec": 0 00:20:11.727 }, 
00:20:11.727 "claimed": true, 00:20:11.727 "claim_type": "exclusive_write", 00:20:11.727 "zoned": false, 00:20:11.727 "supported_io_types": { 00:20:11.727 "read": true, 00:20:11.727 "write": true, 00:20:11.727 "unmap": true, 00:20:11.727 "flush": true, 00:20:11.727 "reset": true, 00:20:11.727 "nvme_admin": false, 00:20:11.727 "nvme_io": false, 00:20:11.727 "nvme_io_md": false, 00:20:11.727 "write_zeroes": true, 00:20:11.727 "zcopy": true, 00:20:11.727 "get_zone_info": false, 00:20:11.727 "zone_management": false, 00:20:11.727 "zone_append": false, 00:20:11.727 "compare": false, 00:20:11.727 "compare_and_write": false, 00:20:11.727 "abort": true, 00:20:11.727 "seek_hole": false, 00:20:11.727 "seek_data": false, 00:20:11.727 "copy": true, 00:20:11.727 "nvme_iov_md": false 00:20:11.727 }, 00:20:11.727 "memory_domains": [ 00:20:11.727 { 00:20:11.727 "dma_device_id": "system", 00:20:11.727 "dma_device_type": 1 00:20:11.727 }, 00:20:11.727 { 00:20:11.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.727 "dma_device_type": 2 00:20:11.727 } 00:20:11.727 ], 00:20:11.727 "driver_specific": {} 00:20:11.727 } 00:20:11.727 ] 00:20:11.727 13:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:11.727 13:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:11.727 13:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:11.727 13:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:11.727 13:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:11.727 13:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:11.727 13:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:20:11.727 13:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:11.727 13:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:11.727 13:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:11.727 13:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:11.727 13:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.727 13:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:12.119 13:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:12.120 "name": "Existed_Raid", 00:20:12.120 "uuid": "30e7914b-14c7-4bca-a34a-7e79712fcd2c", 00:20:12.120 "strip_size_kb": 64, 00:20:12.120 "state": "configuring", 00:20:12.120 "raid_level": "concat", 00:20:12.120 "superblock": true, 00:20:12.120 "num_base_bdevs": 4, 00:20:12.120 "num_base_bdevs_discovered": 3, 00:20:12.120 "num_base_bdevs_operational": 4, 00:20:12.120 "base_bdevs_list": [ 00:20:12.120 { 00:20:12.120 "name": "BaseBdev1", 00:20:12.120 "uuid": "4fbed157-71ea-4e98-98c9-b6ad69f5f11d", 00:20:12.120 "is_configured": true, 00:20:12.120 "data_offset": 2048, 00:20:12.120 "data_size": 63488 00:20:12.120 }, 00:20:12.120 { 00:20:12.120 "name": null, 00:20:12.120 "uuid": "139b908c-f2f4-4afb-9766-e2079dc16f18", 00:20:12.120 "is_configured": false, 00:20:12.120 "data_offset": 2048, 00:20:12.120 "data_size": 63488 00:20:12.120 }, 00:20:12.120 { 00:20:12.120 "name": "BaseBdev3", 00:20:12.120 "uuid": "0ccd9414-9d24-4361-a872-b08aedaba33b", 00:20:12.120 "is_configured": true, 00:20:12.120 "data_offset": 2048, 00:20:12.120 "data_size": 63488 00:20:12.120 }, 00:20:12.120 { 00:20:12.120 
"name": "BaseBdev4", 00:20:12.120 "uuid": "214ed7f5-b5ab-460f-a950-5d14ea06bfe1", 00:20:12.120 "is_configured": true, 00:20:12.120 "data_offset": 2048, 00:20:12.120 "data_size": 63488 00:20:12.120 } 00:20:12.120 ] 00:20:12.120 }' 00:20:12.120 13:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:12.120 13:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.379 13:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.379 13:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:12.637 13:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:12.637 13:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:12.896 [2024-07-26 13:19:53.311105] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:12.896 13:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:12.896 13:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:12.896 13:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:12.896 13:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:12.896 13:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:12.896 13:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:12.896 13:19:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:12.896 13:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:12.896 13:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:12.896 13:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:12.896 13:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.896 13:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:13.154 13:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:13.154 "name": "Existed_Raid", 00:20:13.154 "uuid": "30e7914b-14c7-4bca-a34a-7e79712fcd2c", 00:20:13.154 "strip_size_kb": 64, 00:20:13.154 "state": "configuring", 00:20:13.154 "raid_level": "concat", 00:20:13.154 "superblock": true, 00:20:13.154 "num_base_bdevs": 4, 00:20:13.154 "num_base_bdevs_discovered": 2, 00:20:13.154 "num_base_bdevs_operational": 4, 00:20:13.154 "base_bdevs_list": [ 00:20:13.154 { 00:20:13.154 "name": "BaseBdev1", 00:20:13.154 "uuid": "4fbed157-71ea-4e98-98c9-b6ad69f5f11d", 00:20:13.154 "is_configured": true, 00:20:13.154 "data_offset": 2048, 00:20:13.154 "data_size": 63488 00:20:13.154 }, 00:20:13.154 { 00:20:13.154 "name": null, 00:20:13.154 "uuid": "139b908c-f2f4-4afb-9766-e2079dc16f18", 00:20:13.154 "is_configured": false, 00:20:13.154 "data_offset": 2048, 00:20:13.154 "data_size": 63488 00:20:13.154 }, 00:20:13.154 { 00:20:13.154 "name": null, 00:20:13.154 "uuid": "0ccd9414-9d24-4361-a872-b08aedaba33b", 00:20:13.154 "is_configured": false, 00:20:13.154 "data_offset": 2048, 00:20:13.154 "data_size": 63488 00:20:13.154 }, 00:20:13.154 { 00:20:13.154 "name": "BaseBdev4", 
00:20:13.154 "uuid": "214ed7f5-b5ab-460f-a950-5d14ea06bfe1", 00:20:13.154 "is_configured": true, 00:20:13.154 "data_offset": 2048, 00:20:13.154 "data_size": 63488 00:20:13.154 } 00:20:13.154 ] 00:20:13.154 }' 00:20:13.154 13:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:13.154 13:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.721 13:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.721 13:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:13.979 13:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:13.979 13:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:14.238 [2024-07-26 13:19:54.562455] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:14.238 13:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:14.238 13:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:14.238 13:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:14.238 13:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:14.238 13:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:14.238 13:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:14.238 13:19:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:14.238 13:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:14.238 13:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:14.238 13:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:14.238 13:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.238 13:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.496 13:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:14.496 "name": "Existed_Raid", 00:20:14.496 "uuid": "30e7914b-14c7-4bca-a34a-7e79712fcd2c", 00:20:14.496 "strip_size_kb": 64, 00:20:14.496 "state": "configuring", 00:20:14.496 "raid_level": "concat", 00:20:14.496 "superblock": true, 00:20:14.496 "num_base_bdevs": 4, 00:20:14.496 "num_base_bdevs_discovered": 3, 00:20:14.496 "num_base_bdevs_operational": 4, 00:20:14.496 "base_bdevs_list": [ 00:20:14.496 { 00:20:14.496 "name": "BaseBdev1", 00:20:14.496 "uuid": "4fbed157-71ea-4e98-98c9-b6ad69f5f11d", 00:20:14.496 "is_configured": true, 00:20:14.496 "data_offset": 2048, 00:20:14.496 "data_size": 63488 00:20:14.496 }, 00:20:14.496 { 00:20:14.496 "name": null, 00:20:14.496 "uuid": "139b908c-f2f4-4afb-9766-e2079dc16f18", 00:20:14.496 "is_configured": false, 00:20:14.496 "data_offset": 2048, 00:20:14.497 "data_size": 63488 00:20:14.497 }, 00:20:14.497 { 00:20:14.497 "name": "BaseBdev3", 00:20:14.497 "uuid": "0ccd9414-9d24-4361-a872-b08aedaba33b", 00:20:14.497 "is_configured": true, 00:20:14.497 "data_offset": 2048, 00:20:14.497 "data_size": 63488 00:20:14.497 }, 00:20:14.497 { 00:20:14.497 "name": "BaseBdev4", 
00:20:14.497 "uuid": "214ed7f5-b5ab-460f-a950-5d14ea06bfe1", 00:20:14.497 "is_configured": true, 00:20:14.497 "data_offset": 2048, 00:20:14.497 "data_size": 63488 00:20:14.497 } 00:20:14.497 ] 00:20:14.497 }' 00:20:14.497 13:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:14.497 13:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.064 13:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:15.064 13:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.064 13:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:15.064 13:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:15.322 [2024-07-26 13:19:55.757611] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:15.322 13:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:15.322 13:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:15.322 13:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:15.322 13:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:15.322 13:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:15.322 13:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:15.322 13:19:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:15.322 13:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:15.322 13:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:15.322 13:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:15.322 13:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.322 13:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:15.581 13:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:15.581 "name": "Existed_Raid", 00:20:15.581 "uuid": "30e7914b-14c7-4bca-a34a-7e79712fcd2c", 00:20:15.581 "strip_size_kb": 64, 00:20:15.581 "state": "configuring", 00:20:15.581 "raid_level": "concat", 00:20:15.581 "superblock": true, 00:20:15.581 "num_base_bdevs": 4, 00:20:15.581 "num_base_bdevs_discovered": 2, 00:20:15.581 "num_base_bdevs_operational": 4, 00:20:15.581 "base_bdevs_list": [ 00:20:15.581 { 00:20:15.581 "name": null, 00:20:15.581 "uuid": "4fbed157-71ea-4e98-98c9-b6ad69f5f11d", 00:20:15.581 "is_configured": false, 00:20:15.581 "data_offset": 2048, 00:20:15.581 "data_size": 63488 00:20:15.581 }, 00:20:15.581 { 00:20:15.581 "name": null, 00:20:15.581 "uuid": "139b908c-f2f4-4afb-9766-e2079dc16f18", 00:20:15.581 "is_configured": false, 00:20:15.581 "data_offset": 2048, 00:20:15.581 "data_size": 63488 00:20:15.581 }, 00:20:15.581 { 00:20:15.581 "name": "BaseBdev3", 00:20:15.581 "uuid": "0ccd9414-9d24-4361-a872-b08aedaba33b", 00:20:15.581 "is_configured": true, 00:20:15.581 "data_offset": 2048, 00:20:15.581 "data_size": 63488 00:20:15.581 }, 00:20:15.581 { 00:20:15.581 "name": "BaseBdev4", 00:20:15.581 "uuid": 
"214ed7f5-b5ab-460f-a950-5d14ea06bfe1", 00:20:15.581 "is_configured": true, 00:20:15.581 "data_offset": 2048, 00:20:15.581 "data_size": 63488 00:20:15.581 } 00:20:15.581 ] 00:20:15.581 }' 00:20:15.581 13:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:15.581 13:19:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.150 13:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.150 13:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:16.409 13:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:16.409 13:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:16.668 [2024-07-26 13:19:56.958904] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:16.668 13:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:20:16.668 13:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:16.668 13:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:16.668 13:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:16.668 13:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:16.668 13:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:16.668 13:19:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:16.668 13:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:16.668 13:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:16.668 13:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:16.668 13:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.668 13:19:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.927 13:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:16.927 "name": "Existed_Raid", 00:20:16.927 "uuid": "30e7914b-14c7-4bca-a34a-7e79712fcd2c", 00:20:16.927 "strip_size_kb": 64, 00:20:16.927 "state": "configuring", 00:20:16.927 "raid_level": "concat", 00:20:16.927 "superblock": true, 00:20:16.927 "num_base_bdevs": 4, 00:20:16.927 "num_base_bdevs_discovered": 3, 00:20:16.927 "num_base_bdevs_operational": 4, 00:20:16.927 "base_bdevs_list": [ 00:20:16.927 { 00:20:16.927 "name": null, 00:20:16.927 "uuid": "4fbed157-71ea-4e98-98c9-b6ad69f5f11d", 00:20:16.927 "is_configured": false, 00:20:16.927 "data_offset": 2048, 00:20:16.927 "data_size": 63488 00:20:16.927 }, 00:20:16.927 { 00:20:16.927 "name": "BaseBdev2", 00:20:16.927 "uuid": "139b908c-f2f4-4afb-9766-e2079dc16f18", 00:20:16.927 "is_configured": true, 00:20:16.927 "data_offset": 2048, 00:20:16.927 "data_size": 63488 00:20:16.927 }, 00:20:16.927 { 00:20:16.927 "name": "BaseBdev3", 00:20:16.927 "uuid": "0ccd9414-9d24-4361-a872-b08aedaba33b", 00:20:16.927 "is_configured": true, 00:20:16.927 "data_offset": 2048, 00:20:16.927 "data_size": 63488 00:20:16.927 }, 00:20:16.927 { 00:20:16.927 "name": "BaseBdev4", 
00:20:16.927 "uuid": "214ed7f5-b5ab-460f-a950-5d14ea06bfe1", 00:20:16.927 "is_configured": true, 00:20:16.927 "data_offset": 2048, 00:20:16.927 "data_size": 63488 00:20:16.927 } 00:20:16.927 ] 00:20:16.927 }' 00:20:16.927 13:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:16.927 13:19:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:17.495 13:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:17.495 13:19:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.495 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:17.495 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.495 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:17.753 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 4fbed157-71ea-4e98-98c9-b6ad69f5f11d 00:20:18.012 [2024-07-26 13:19:58.462060] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:18.012 [2024-07-26 13:19:58.462214] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xf860c0 00:20:18.012 [2024-07-26 13:19:58.462227] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:18.012 [2024-07-26 13:19:58.462389] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xde1c40 00:20:18.012 [2024-07-26 13:19:58.462499] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf860c0 00:20:18.012 [2024-07-26 13:19:58.462508] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xf860c0 00:20:18.012 [2024-07-26 13:19:58.462592] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.012 NewBaseBdev 00:20:18.012 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:18.012 13:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:20:18.012 13:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:18.012 13:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:18.012 13:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:18.012 13:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:18.012 13:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:18.271 13:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:18.530 [ 00:20:18.530 { 00:20:18.530 "name": "NewBaseBdev", 00:20:18.530 "aliases": [ 00:20:18.530 "4fbed157-71ea-4e98-98c9-b6ad69f5f11d" 00:20:18.530 ], 00:20:18.530 "product_name": "Malloc disk", 00:20:18.530 "block_size": 512, 00:20:18.530 "num_blocks": 65536, 00:20:18.530 "uuid": "4fbed157-71ea-4e98-98c9-b6ad69f5f11d", 00:20:18.530 "assigned_rate_limits": { 00:20:18.530 "rw_ios_per_sec": 0, 00:20:18.530 "rw_mbytes_per_sec": 0, 00:20:18.530 "r_mbytes_per_sec": 0, 00:20:18.530 
"w_mbytes_per_sec": 0 00:20:18.530 }, 00:20:18.530 "claimed": true, 00:20:18.530 "claim_type": "exclusive_write", 00:20:18.530 "zoned": false, 00:20:18.530 "supported_io_types": { 00:20:18.530 "read": true, 00:20:18.530 "write": true, 00:20:18.530 "unmap": true, 00:20:18.530 "flush": true, 00:20:18.530 "reset": true, 00:20:18.530 "nvme_admin": false, 00:20:18.530 "nvme_io": false, 00:20:18.530 "nvme_io_md": false, 00:20:18.530 "write_zeroes": true, 00:20:18.530 "zcopy": true, 00:20:18.530 "get_zone_info": false, 00:20:18.530 "zone_management": false, 00:20:18.530 "zone_append": false, 00:20:18.530 "compare": false, 00:20:18.530 "compare_and_write": false, 00:20:18.530 "abort": true, 00:20:18.530 "seek_hole": false, 00:20:18.530 "seek_data": false, 00:20:18.530 "copy": true, 00:20:18.530 "nvme_iov_md": false 00:20:18.530 }, 00:20:18.530 "memory_domains": [ 00:20:18.530 { 00:20:18.530 "dma_device_id": "system", 00:20:18.530 "dma_device_type": 1 00:20:18.530 }, 00:20:18.530 { 00:20:18.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.530 "dma_device_type": 2 00:20:18.530 } 00:20:18.530 ], 00:20:18.530 "driver_specific": {} 00:20:18.530 } 00:20:18.530 ] 00:20:18.530 13:19:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:18.530 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:20:18.530 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:18.530 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:18.530 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:18.530 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:18.530 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:20:18.530 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:18.530 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:18.530 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:18.530 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:18.530 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.530 13:19:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:18.827 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:18.827 "name": "Existed_Raid", 00:20:18.827 "uuid": "30e7914b-14c7-4bca-a34a-7e79712fcd2c", 00:20:18.827 "strip_size_kb": 64, 00:20:18.827 "state": "online", 00:20:18.827 "raid_level": "concat", 00:20:18.827 "superblock": true, 00:20:18.827 "num_base_bdevs": 4, 00:20:18.827 "num_base_bdevs_discovered": 4, 00:20:18.827 "num_base_bdevs_operational": 4, 00:20:18.827 "base_bdevs_list": [ 00:20:18.827 { 00:20:18.827 "name": "NewBaseBdev", 00:20:18.827 "uuid": "4fbed157-71ea-4e98-98c9-b6ad69f5f11d", 00:20:18.827 "is_configured": true, 00:20:18.827 "data_offset": 2048, 00:20:18.827 "data_size": 63488 00:20:18.827 }, 00:20:18.827 { 00:20:18.827 "name": "BaseBdev2", 00:20:18.827 "uuid": "139b908c-f2f4-4afb-9766-e2079dc16f18", 00:20:18.827 "is_configured": true, 00:20:18.827 "data_offset": 2048, 00:20:18.827 "data_size": 63488 00:20:18.827 }, 00:20:18.827 { 00:20:18.827 "name": "BaseBdev3", 00:20:18.827 "uuid": "0ccd9414-9d24-4361-a872-b08aedaba33b", 00:20:18.827 "is_configured": true, 00:20:18.827 "data_offset": 2048, 00:20:18.827 "data_size": 63488 00:20:18.827 }, 
00:20:18.827 { 00:20:18.827 "name": "BaseBdev4", 00:20:18.827 "uuid": "214ed7f5-b5ab-460f-a950-5d14ea06bfe1", 00:20:18.827 "is_configured": true, 00:20:18.827 "data_offset": 2048, 00:20:18.827 "data_size": 63488 00:20:18.827 } 00:20:18.827 ] 00:20:18.827 }' 00:20:18.827 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:18.828 13:19:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:19.395 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:19.395 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:19.395 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:19.395 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:19.395 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:19.395 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:19.395 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:19.395 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:19.395 [2024-07-26 13:19:59.902132] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:19.654 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:19.654 "name": "Existed_Raid", 00:20:19.654 "aliases": [ 00:20:19.654 "30e7914b-14c7-4bca-a34a-7e79712fcd2c" 00:20:19.654 ], 00:20:19.654 "product_name": "Raid Volume", 00:20:19.654 "block_size": 512, 00:20:19.654 "num_blocks": 253952, 00:20:19.654 "uuid": "30e7914b-14c7-4bca-a34a-7e79712fcd2c", 
00:20:19.654 "assigned_rate_limits": { 00:20:19.654 "rw_ios_per_sec": 0, 00:20:19.654 "rw_mbytes_per_sec": 0, 00:20:19.654 "r_mbytes_per_sec": 0, 00:20:19.654 "w_mbytes_per_sec": 0 00:20:19.654 }, 00:20:19.654 "claimed": false, 00:20:19.654 "zoned": false, 00:20:19.654 "supported_io_types": { 00:20:19.654 "read": true, 00:20:19.654 "write": true, 00:20:19.654 "unmap": true, 00:20:19.654 "flush": true, 00:20:19.654 "reset": true, 00:20:19.654 "nvme_admin": false, 00:20:19.654 "nvme_io": false, 00:20:19.654 "nvme_io_md": false, 00:20:19.654 "write_zeroes": true, 00:20:19.654 "zcopy": false, 00:20:19.654 "get_zone_info": false, 00:20:19.654 "zone_management": false, 00:20:19.654 "zone_append": false, 00:20:19.654 "compare": false, 00:20:19.654 "compare_and_write": false, 00:20:19.654 "abort": false, 00:20:19.654 "seek_hole": false, 00:20:19.654 "seek_data": false, 00:20:19.654 "copy": false, 00:20:19.654 "nvme_iov_md": false 00:20:19.654 }, 00:20:19.654 "memory_domains": [ 00:20:19.654 { 00:20:19.654 "dma_device_id": "system", 00:20:19.654 "dma_device_type": 1 00:20:19.654 }, 00:20:19.654 { 00:20:19.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:19.654 "dma_device_type": 2 00:20:19.654 }, 00:20:19.654 { 00:20:19.654 "dma_device_id": "system", 00:20:19.654 "dma_device_type": 1 00:20:19.654 }, 00:20:19.654 { 00:20:19.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:19.654 "dma_device_type": 2 00:20:19.654 }, 00:20:19.654 { 00:20:19.654 "dma_device_id": "system", 00:20:19.654 "dma_device_type": 1 00:20:19.654 }, 00:20:19.654 { 00:20:19.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:19.654 "dma_device_type": 2 00:20:19.654 }, 00:20:19.654 { 00:20:19.654 "dma_device_id": "system", 00:20:19.654 "dma_device_type": 1 00:20:19.654 }, 00:20:19.654 { 00:20:19.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:19.654 "dma_device_type": 2 00:20:19.654 } 00:20:19.654 ], 00:20:19.654 "driver_specific": { 00:20:19.654 "raid": { 00:20:19.654 "uuid": 
"30e7914b-14c7-4bca-a34a-7e79712fcd2c", 00:20:19.654 "strip_size_kb": 64, 00:20:19.654 "state": "online", 00:20:19.654 "raid_level": "concat", 00:20:19.654 "superblock": true, 00:20:19.654 "num_base_bdevs": 4, 00:20:19.654 "num_base_bdevs_discovered": 4, 00:20:19.654 "num_base_bdevs_operational": 4, 00:20:19.654 "base_bdevs_list": [ 00:20:19.654 { 00:20:19.654 "name": "NewBaseBdev", 00:20:19.654 "uuid": "4fbed157-71ea-4e98-98c9-b6ad69f5f11d", 00:20:19.654 "is_configured": true, 00:20:19.654 "data_offset": 2048, 00:20:19.654 "data_size": 63488 00:20:19.654 }, 00:20:19.654 { 00:20:19.654 "name": "BaseBdev2", 00:20:19.654 "uuid": "139b908c-f2f4-4afb-9766-e2079dc16f18", 00:20:19.654 "is_configured": true, 00:20:19.654 "data_offset": 2048, 00:20:19.654 "data_size": 63488 00:20:19.654 }, 00:20:19.654 { 00:20:19.654 "name": "BaseBdev3", 00:20:19.654 "uuid": "0ccd9414-9d24-4361-a872-b08aedaba33b", 00:20:19.654 "is_configured": true, 00:20:19.654 "data_offset": 2048, 00:20:19.654 "data_size": 63488 00:20:19.654 }, 00:20:19.654 { 00:20:19.654 "name": "BaseBdev4", 00:20:19.654 "uuid": "214ed7f5-b5ab-460f-a950-5d14ea06bfe1", 00:20:19.654 "is_configured": true, 00:20:19.654 "data_offset": 2048, 00:20:19.654 "data_size": 63488 00:20:19.654 } 00:20:19.654 ] 00:20:19.654 } 00:20:19.654 } 00:20:19.654 }' 00:20:19.654 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:19.654 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:19.654 BaseBdev2 00:20:19.654 BaseBdev3 00:20:19.654 BaseBdev4' 00:20:19.654 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:19.655 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
NewBaseBdev 00:20:19.655 13:19:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:19.913 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:19.913 "name": "NewBaseBdev", 00:20:19.913 "aliases": [ 00:20:19.913 "4fbed157-71ea-4e98-98c9-b6ad69f5f11d" 00:20:19.913 ], 00:20:19.913 "product_name": "Malloc disk", 00:20:19.913 "block_size": 512, 00:20:19.913 "num_blocks": 65536, 00:20:19.913 "uuid": "4fbed157-71ea-4e98-98c9-b6ad69f5f11d", 00:20:19.913 "assigned_rate_limits": { 00:20:19.913 "rw_ios_per_sec": 0, 00:20:19.913 "rw_mbytes_per_sec": 0, 00:20:19.913 "r_mbytes_per_sec": 0, 00:20:19.913 "w_mbytes_per_sec": 0 00:20:19.913 }, 00:20:19.913 "claimed": true, 00:20:19.913 "claim_type": "exclusive_write", 00:20:19.913 "zoned": false, 00:20:19.913 "supported_io_types": { 00:20:19.913 "read": true, 00:20:19.913 "write": true, 00:20:19.913 "unmap": true, 00:20:19.913 "flush": true, 00:20:19.913 "reset": true, 00:20:19.913 "nvme_admin": false, 00:20:19.913 "nvme_io": false, 00:20:19.913 "nvme_io_md": false, 00:20:19.913 "write_zeroes": true, 00:20:19.913 "zcopy": true, 00:20:19.913 "get_zone_info": false, 00:20:19.913 "zone_management": false, 00:20:19.913 "zone_append": false, 00:20:19.913 "compare": false, 00:20:19.913 "compare_and_write": false, 00:20:19.913 "abort": true, 00:20:19.913 "seek_hole": false, 00:20:19.913 "seek_data": false, 00:20:19.913 "copy": true, 00:20:19.913 "nvme_iov_md": false 00:20:19.913 }, 00:20:19.913 "memory_domains": [ 00:20:19.913 { 00:20:19.913 "dma_device_id": "system", 00:20:19.913 "dma_device_type": 1 00:20:19.913 }, 00:20:19.913 { 00:20:19.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:19.913 "dma_device_type": 2 00:20:19.913 } 00:20:19.913 ], 00:20:19.913 "driver_specific": {} 00:20:19.913 }' 00:20:19.913 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:19.914 13:20:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:19.914 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:19.914 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:19.914 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:19.914 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:19.914 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:19.914 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:19.914 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:19.914 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:20.173 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:20.173 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:20.173 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:20.173 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:20.173 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:20.432 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:20.432 "name": "BaseBdev2", 00:20:20.432 "aliases": [ 00:20:20.432 "139b908c-f2f4-4afb-9766-e2079dc16f18" 00:20:20.432 ], 00:20:20.432 "product_name": "Malloc disk", 00:20:20.432 "block_size": 512, 00:20:20.432 "num_blocks": 65536, 00:20:20.432 "uuid": "139b908c-f2f4-4afb-9766-e2079dc16f18", 00:20:20.432 
"assigned_rate_limits": { 00:20:20.432 "rw_ios_per_sec": 0, 00:20:20.432 "rw_mbytes_per_sec": 0, 00:20:20.432 "r_mbytes_per_sec": 0, 00:20:20.432 "w_mbytes_per_sec": 0 00:20:20.432 }, 00:20:20.432 "claimed": true, 00:20:20.432 "claim_type": "exclusive_write", 00:20:20.432 "zoned": false, 00:20:20.432 "supported_io_types": { 00:20:20.432 "read": true, 00:20:20.432 "write": true, 00:20:20.432 "unmap": true, 00:20:20.432 "flush": true, 00:20:20.432 "reset": true, 00:20:20.432 "nvme_admin": false, 00:20:20.432 "nvme_io": false, 00:20:20.432 "nvme_io_md": false, 00:20:20.432 "write_zeroes": true, 00:20:20.432 "zcopy": true, 00:20:20.432 "get_zone_info": false, 00:20:20.432 "zone_management": false, 00:20:20.432 "zone_append": false, 00:20:20.432 "compare": false, 00:20:20.432 "compare_and_write": false, 00:20:20.432 "abort": true, 00:20:20.432 "seek_hole": false, 00:20:20.432 "seek_data": false, 00:20:20.432 "copy": true, 00:20:20.432 "nvme_iov_md": false 00:20:20.432 }, 00:20:20.432 "memory_domains": [ 00:20:20.432 { 00:20:20.432 "dma_device_id": "system", 00:20:20.432 "dma_device_type": 1 00:20:20.432 }, 00:20:20.432 { 00:20:20.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:20.432 "dma_device_type": 2 00:20:20.432 } 00:20:20.432 ], 00:20:20.432 "driver_specific": {} 00:20:20.432 }' 00:20:20.432 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:20.432 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:20.432 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:20.432 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:20.432 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:20.432 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:20.432 13:20:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:20.691 13:20:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:20.691 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:20.691 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:20.691 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:20.691 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:20.691 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:20.691 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:20.691 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:20.949 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:20.949 "name": "BaseBdev3", 00:20:20.949 "aliases": [ 00:20:20.949 "0ccd9414-9d24-4361-a872-b08aedaba33b" 00:20:20.949 ], 00:20:20.949 "product_name": "Malloc disk", 00:20:20.949 "block_size": 512, 00:20:20.949 "num_blocks": 65536, 00:20:20.949 "uuid": "0ccd9414-9d24-4361-a872-b08aedaba33b", 00:20:20.949 "assigned_rate_limits": { 00:20:20.949 "rw_ios_per_sec": 0, 00:20:20.949 "rw_mbytes_per_sec": 0, 00:20:20.949 "r_mbytes_per_sec": 0, 00:20:20.949 "w_mbytes_per_sec": 0 00:20:20.949 }, 00:20:20.949 "claimed": true, 00:20:20.949 "claim_type": "exclusive_write", 00:20:20.949 "zoned": false, 00:20:20.949 "supported_io_types": { 00:20:20.949 "read": true, 00:20:20.949 "write": true, 00:20:20.949 "unmap": true, 00:20:20.949 "flush": true, 00:20:20.949 "reset": true, 00:20:20.949 "nvme_admin": false, 00:20:20.949 "nvme_io": false, 00:20:20.949 "nvme_io_md": false, 00:20:20.949 
"write_zeroes": true, 00:20:20.949 "zcopy": true, 00:20:20.949 "get_zone_info": false, 00:20:20.949 "zone_management": false, 00:20:20.949 "zone_append": false, 00:20:20.949 "compare": false, 00:20:20.949 "compare_and_write": false, 00:20:20.949 "abort": true, 00:20:20.949 "seek_hole": false, 00:20:20.949 "seek_data": false, 00:20:20.949 "copy": true, 00:20:20.949 "nvme_iov_md": false 00:20:20.949 }, 00:20:20.949 "memory_domains": [ 00:20:20.949 { 00:20:20.949 "dma_device_id": "system", 00:20:20.949 "dma_device_type": 1 00:20:20.949 }, 00:20:20.949 { 00:20:20.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:20.949 "dma_device_type": 2 00:20:20.949 } 00:20:20.949 ], 00:20:20.949 "driver_specific": {} 00:20:20.949 }' 00:20:20.949 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:20.949 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:20.949 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:20.949 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:20.949 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:21.208 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:21.208 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:21.208 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:21.208 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:21.208 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:21.208 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:21.208 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:20:21.208 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:21.208 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:20:21.208 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:21.467 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:21.467 "name": "BaseBdev4", 00:20:21.467 "aliases": [ 00:20:21.467 "214ed7f5-b5ab-460f-a950-5d14ea06bfe1" 00:20:21.467 ], 00:20:21.467 "product_name": "Malloc disk", 00:20:21.467 "block_size": 512, 00:20:21.467 "num_blocks": 65536, 00:20:21.467 "uuid": "214ed7f5-b5ab-460f-a950-5d14ea06bfe1", 00:20:21.467 "assigned_rate_limits": { 00:20:21.467 "rw_ios_per_sec": 0, 00:20:21.467 "rw_mbytes_per_sec": 0, 00:20:21.467 "r_mbytes_per_sec": 0, 00:20:21.467 "w_mbytes_per_sec": 0 00:20:21.467 }, 00:20:21.467 "claimed": true, 00:20:21.467 "claim_type": "exclusive_write", 00:20:21.467 "zoned": false, 00:20:21.467 "supported_io_types": { 00:20:21.467 "read": true, 00:20:21.467 "write": true, 00:20:21.467 "unmap": true, 00:20:21.467 "flush": true, 00:20:21.467 "reset": true, 00:20:21.467 "nvme_admin": false, 00:20:21.467 "nvme_io": false, 00:20:21.467 "nvme_io_md": false, 00:20:21.467 "write_zeroes": true, 00:20:21.467 "zcopy": true, 00:20:21.467 "get_zone_info": false, 00:20:21.467 "zone_management": false, 00:20:21.467 "zone_append": false, 00:20:21.467 "compare": false, 00:20:21.467 "compare_and_write": false, 00:20:21.467 "abort": true, 00:20:21.467 "seek_hole": false, 00:20:21.467 "seek_data": false, 00:20:21.467 "copy": true, 00:20:21.467 "nvme_iov_md": false 00:20:21.467 }, 00:20:21.467 "memory_domains": [ 00:20:21.467 { 00:20:21.467 "dma_device_id": "system", 00:20:21.467 "dma_device_type": 1 00:20:21.467 }, 00:20:21.467 { 00:20:21.467 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.467 "dma_device_type": 2 00:20:21.467 } 00:20:21.467 ], 00:20:21.467 "driver_specific": {} 00:20:21.467 }' 00:20:21.467 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:21.467 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:21.467 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:21.467 13:20:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:21.726 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:21.726 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:21.726 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:21.726 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:21.726 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:21.726 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:21.726 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:21.726 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:21.726 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:21.985 [2024-07-26 13:20:02.436622] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:21.985 [2024-07-26 13:20:02.436646] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:21.985 [2024-07-26 13:20:02.436692] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:20:21.985 [2024-07-26 13:20:02.436749] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:21.985 [2024-07-26 13:20:02.436760] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf860c0 name Existed_Raid, state offline 00:20:21.985 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 750937 00:20:21.985 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 750937 ']' 00:20:21.985 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 750937 00:20:21.985 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:20:21.985 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:21.985 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 750937 00:20:22.244 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:22.244 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:22.244 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 750937' 00:20:22.244 killing process with pid 750937 00:20:22.244 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 750937 00:20:22.244 [2024-07-26 13:20:02.514484] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:22.244 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 750937 00:20:22.244 [2024-07-26 13:20:02.545999] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:22.244 13:20:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:20:22.244 00:20:22.244 
real 0m30.192s 00:20:22.244 user 0m55.336s 00:20:22.244 sys 0m5.496s 00:20:22.244 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:22.244 13:20:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.244 ************************************ 00:20:22.244 END TEST raid_state_function_test_sb 00:20:22.244 ************************************ 00:20:22.503 13:20:02 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:20:22.503 13:20:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:20:22.503 13:20:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:22.503 13:20:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:22.503 ************************************ 00:20:22.503 START TEST raid_superblock_test 00:20:22.503 ************************************ 00:20:22.503 13:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:20:22.503 13:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:20:22.503 13:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:20:22.503 13:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:20:22.503 13:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:20:22.503 13:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:20:22.503 13:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:20:22.503 13:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:20:22.503 13:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:20:22.503 13:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # 
local raid_bdev_name=raid_bdev1 00:20:22.503 13:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:20:22.503 13:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:20:22.503 13:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:20:22.503 13:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:20:22.503 13:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:20:22.503 13:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:20:22.503 13:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:20:22.503 13:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:22.503 13:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=756630 00:20:22.503 13:20:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 756630 /var/tmp/spdk-raid.sock 00:20:22.503 13:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 756630 ']' 00:20:22.503 13:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:22.503 13:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:22.504 13:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:22.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:20:22.504 13:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:22.504 13:20:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:22.504 [2024-07-26 13:20:02.873957] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:20:22.504 [2024-07-26 13:20:02.874015] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid756630 ] 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3d:02.3 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3f:01.7 
cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:22.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:22.504 EAL: Requested device 0000:3f:02.7 cannot be used 00:20:22.504 [2024-07-26 13:20:03.005941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.762 [2024-07-26 13:20:03.093370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.762 [2024-07-26 13:20:03.151032] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:22.762 [2024-07-26 13:20:03.151063] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:23.331 13:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:23.331 13:20:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:20:23.331 13:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:20:23.331 13:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 
00:20:23.331 13:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:20:23.331 13:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:20:23.331 13:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:23.331 13:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:23.331 13:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:20:23.331 13:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:23.331 13:20:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:23.590 malloc1 00:20:23.590 13:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:23.849 [2024-07-26 13:20:04.235397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:23.849 [2024-07-26 13:20:04.235440] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.849 [2024-07-26 13:20:04.235458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c042f0 00:20:23.849 [2024-07-26 13:20:04.235470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.849 [2024-07-26 13:20:04.236915] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.849 [2024-07-26 13:20:04.236943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:23.849 pt1 00:20:23.849 13:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( 
i++ )) 00:20:23.849 13:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:20:23.849 13:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:20:23.849 13:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:20:23.849 13:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:23.849 13:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:23.849 13:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:20:23.849 13:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:23.849 13:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:24.107 malloc2 00:20:24.108 13:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:24.367 [2024-07-26 13:20:04.697070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:24.367 [2024-07-26 13:20:04.697113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.367 [2024-07-26 13:20:04.697129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c056d0 00:20:24.367 [2024-07-26 13:20:04.697144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.367 [2024-07-26 13:20:04.698536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.367 [2024-07-26 13:20:04.698562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt2 00:20:24.367 pt2 00:20:24.367 13:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:20:24.367 13:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:20:24.367 13:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:20:24.367 13:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:20:24.367 13:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:24.367 13:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:24.367 13:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:20:24.367 13:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:24.367 13:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:24.626 malloc3 00:20:24.626 13:20:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:24.885 [2024-07-26 13:20:05.154661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:24.885 [2024-07-26 13:20:05.154702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.885 [2024-07-26 13:20:05.154718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d9e6b0 00:20:24.885 [2024-07-26 13:20:05.154729] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.885 [2024-07-26 13:20:05.156080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:20:24.885 [2024-07-26 13:20:05.156107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:24.885 pt3 00:20:24.885 13:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:20:24.885 13:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:20:24.885 13:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:20:24.885 13:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:20:24.885 13:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:20:24.885 13:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:24.885 13:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:20:24.885 13:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:24.885 13:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:20:24.885 malloc4 00:20:24.885 13:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:25.144 [2024-07-26 13:20:05.612106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:25.144 [2024-07-26 13:20:05.612151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.144 [2024-07-26 13:20:05.612167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d9c370 00:20:25.144 [2024-07-26 13:20:05.612178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:20:25.144 [2024-07-26 13:20:05.613526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.144 [2024-07-26 13:20:05.613552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:25.144 pt4 00:20:25.144 13:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:20:25.144 13:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:20:25.144 13:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:20:25.403 [2024-07-26 13:20:05.828694] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:25.403 [2024-07-26 13:20:05.829807] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:25.403 [2024-07-26 13:20:05.829858] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:25.403 [2024-07-26 13:20:05.829900] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:25.403 [2024-07-26 13:20:05.830047] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1bfd560 00:20:25.403 [2024-07-26 13:20:05.830057] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:25.403 [2024-07-26 13:20:05.830248] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1bfe4d0 00:20:25.403 [2024-07-26 13:20:05.830372] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1bfd560 00:20:25.403 [2024-07-26 13:20:05.830381] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1bfd560 00:20:25.403 [2024-07-26 13:20:05.830481] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:25.403 13:20:05 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:20:25.403 13:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:25.403 13:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:25.403 13:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:25.403 13:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:25.403 13:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:25.403 13:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:25.403 13:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:25.403 13:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:25.403 13:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:25.403 13:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.403 13:20:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.663 13:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:25.663 "name": "raid_bdev1", 00:20:25.663 "uuid": "f2c1846a-3b34-480a-a608-21f56080565a", 00:20:25.663 "strip_size_kb": 64, 00:20:25.663 "state": "online", 00:20:25.663 "raid_level": "concat", 00:20:25.663 "superblock": true, 00:20:25.663 "num_base_bdevs": 4, 00:20:25.663 "num_base_bdevs_discovered": 4, 00:20:25.663 "num_base_bdevs_operational": 4, 00:20:25.663 "base_bdevs_list": [ 00:20:25.663 { 00:20:25.663 "name": "pt1", 00:20:25.663 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:25.663 
"is_configured": true, 00:20:25.663 "data_offset": 2048, 00:20:25.663 "data_size": 63488 00:20:25.663 }, 00:20:25.663 { 00:20:25.663 "name": "pt2", 00:20:25.663 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:25.663 "is_configured": true, 00:20:25.663 "data_offset": 2048, 00:20:25.663 "data_size": 63488 00:20:25.663 }, 00:20:25.663 { 00:20:25.663 "name": "pt3", 00:20:25.663 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:25.663 "is_configured": true, 00:20:25.663 "data_offset": 2048, 00:20:25.663 "data_size": 63488 00:20:25.663 }, 00:20:25.663 { 00:20:25.663 "name": "pt4", 00:20:25.663 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:25.663 "is_configured": true, 00:20:25.663 "data_offset": 2048, 00:20:25.663 "data_size": 63488 00:20:25.663 } 00:20:25.663 ] 00:20:25.663 }' 00:20:25.663 13:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:25.663 13:20:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.230 13:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:20:26.230 13:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:26.230 13:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:26.230 13:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:26.230 13:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:26.230 13:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:26.230 13:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:26.230 13:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:26.489 [2024-07-26 13:20:06.859692] 
bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:26.490 13:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:26.490 "name": "raid_bdev1", 00:20:26.490 "aliases": [ 00:20:26.490 "f2c1846a-3b34-480a-a608-21f56080565a" 00:20:26.490 ], 00:20:26.490 "product_name": "Raid Volume", 00:20:26.490 "block_size": 512, 00:20:26.490 "num_blocks": 253952, 00:20:26.490 "uuid": "f2c1846a-3b34-480a-a608-21f56080565a", 00:20:26.490 "assigned_rate_limits": { 00:20:26.490 "rw_ios_per_sec": 0, 00:20:26.490 "rw_mbytes_per_sec": 0, 00:20:26.490 "r_mbytes_per_sec": 0, 00:20:26.490 "w_mbytes_per_sec": 0 00:20:26.490 }, 00:20:26.490 "claimed": false, 00:20:26.490 "zoned": false, 00:20:26.490 "supported_io_types": { 00:20:26.490 "read": true, 00:20:26.490 "write": true, 00:20:26.490 "unmap": true, 00:20:26.490 "flush": true, 00:20:26.490 "reset": true, 00:20:26.490 "nvme_admin": false, 00:20:26.490 "nvme_io": false, 00:20:26.490 "nvme_io_md": false, 00:20:26.490 "write_zeroes": true, 00:20:26.490 "zcopy": false, 00:20:26.490 "get_zone_info": false, 00:20:26.490 "zone_management": false, 00:20:26.490 "zone_append": false, 00:20:26.490 "compare": false, 00:20:26.490 "compare_and_write": false, 00:20:26.490 "abort": false, 00:20:26.490 "seek_hole": false, 00:20:26.490 "seek_data": false, 00:20:26.490 "copy": false, 00:20:26.490 "nvme_iov_md": false 00:20:26.490 }, 00:20:26.490 "memory_domains": [ 00:20:26.490 { 00:20:26.490 "dma_device_id": "system", 00:20:26.490 "dma_device_type": 1 00:20:26.490 }, 00:20:26.490 { 00:20:26.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.490 "dma_device_type": 2 00:20:26.490 }, 00:20:26.490 { 00:20:26.490 "dma_device_id": "system", 00:20:26.490 "dma_device_type": 1 00:20:26.490 }, 00:20:26.490 { 00:20:26.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.490 "dma_device_type": 2 00:20:26.490 }, 00:20:26.490 { 00:20:26.490 "dma_device_id": "system", 00:20:26.490 
"dma_device_type": 1 00:20:26.490 }, 00:20:26.490 { 00:20:26.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.490 "dma_device_type": 2 00:20:26.490 }, 00:20:26.490 { 00:20:26.490 "dma_device_id": "system", 00:20:26.490 "dma_device_type": 1 00:20:26.490 }, 00:20:26.490 { 00:20:26.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.490 "dma_device_type": 2 00:20:26.490 } 00:20:26.490 ], 00:20:26.490 "driver_specific": { 00:20:26.490 "raid": { 00:20:26.490 "uuid": "f2c1846a-3b34-480a-a608-21f56080565a", 00:20:26.490 "strip_size_kb": 64, 00:20:26.490 "state": "online", 00:20:26.490 "raid_level": "concat", 00:20:26.490 "superblock": true, 00:20:26.490 "num_base_bdevs": 4, 00:20:26.490 "num_base_bdevs_discovered": 4, 00:20:26.490 "num_base_bdevs_operational": 4, 00:20:26.490 "base_bdevs_list": [ 00:20:26.490 { 00:20:26.490 "name": "pt1", 00:20:26.490 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:26.490 "is_configured": true, 00:20:26.490 "data_offset": 2048, 00:20:26.490 "data_size": 63488 00:20:26.490 }, 00:20:26.490 { 00:20:26.490 "name": "pt2", 00:20:26.490 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:26.490 "is_configured": true, 00:20:26.490 "data_offset": 2048, 00:20:26.490 "data_size": 63488 00:20:26.490 }, 00:20:26.490 { 00:20:26.490 "name": "pt3", 00:20:26.490 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:26.490 "is_configured": true, 00:20:26.490 "data_offset": 2048, 00:20:26.490 "data_size": 63488 00:20:26.490 }, 00:20:26.490 { 00:20:26.490 "name": "pt4", 00:20:26.490 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:26.490 "is_configured": true, 00:20:26.490 "data_offset": 2048, 00:20:26.490 "data_size": 63488 00:20:26.490 } 00:20:26.490 ] 00:20:26.490 } 00:20:26.490 } 00:20:26.490 }' 00:20:26.490 13:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:26.490 13:20:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:26.490 pt2 00:20:26.490 pt3 00:20:26.490 pt4' 00:20:26.490 13:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:26.490 13:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:26.490 13:20:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:26.749 13:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:26.749 "name": "pt1", 00:20:26.749 "aliases": [ 00:20:26.749 "00000000-0000-0000-0000-000000000001" 00:20:26.749 ], 00:20:26.749 "product_name": "passthru", 00:20:26.749 "block_size": 512, 00:20:26.749 "num_blocks": 65536, 00:20:26.749 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:26.749 "assigned_rate_limits": { 00:20:26.749 "rw_ios_per_sec": 0, 00:20:26.749 "rw_mbytes_per_sec": 0, 00:20:26.749 "r_mbytes_per_sec": 0, 00:20:26.749 "w_mbytes_per_sec": 0 00:20:26.749 }, 00:20:26.749 "claimed": true, 00:20:26.749 "claim_type": "exclusive_write", 00:20:26.749 "zoned": false, 00:20:26.749 "supported_io_types": { 00:20:26.749 "read": true, 00:20:26.749 "write": true, 00:20:26.749 "unmap": true, 00:20:26.749 "flush": true, 00:20:26.749 "reset": true, 00:20:26.749 "nvme_admin": false, 00:20:26.749 "nvme_io": false, 00:20:26.749 "nvme_io_md": false, 00:20:26.749 "write_zeroes": true, 00:20:26.749 "zcopy": true, 00:20:26.749 "get_zone_info": false, 00:20:26.749 "zone_management": false, 00:20:26.749 "zone_append": false, 00:20:26.749 "compare": false, 00:20:26.749 "compare_and_write": false, 00:20:26.749 "abort": true, 00:20:26.749 "seek_hole": false, 00:20:26.749 "seek_data": false, 00:20:26.749 "copy": true, 00:20:26.749 "nvme_iov_md": false 00:20:26.749 }, 00:20:26.749 "memory_domains": [ 00:20:26.749 { 00:20:26.749 "dma_device_id": "system", 00:20:26.749 
"dma_device_type": 1 00:20:26.749 }, 00:20:26.749 { 00:20:26.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.749 "dma_device_type": 2 00:20:26.749 } 00:20:26.749 ], 00:20:26.749 "driver_specific": { 00:20:26.749 "passthru": { 00:20:26.749 "name": "pt1", 00:20:26.749 "base_bdev_name": "malloc1" 00:20:26.749 } 00:20:26.749 } 00:20:26.749 }' 00:20:26.749 13:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:26.749 13:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:26.749 13:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:26.749 13:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:27.008 13:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:27.008 13:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:27.008 13:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:27.008 13:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:27.008 13:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:27.008 13:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:27.008 13:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:27.008 13:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:27.008 13:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:27.008 13:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:27.008 13:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:27.267 13:20:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:27.267 "name": "pt2", 00:20:27.267 "aliases": [ 00:20:27.267 "00000000-0000-0000-0000-000000000002" 00:20:27.267 ], 00:20:27.267 "product_name": "passthru", 00:20:27.267 "block_size": 512, 00:20:27.267 "num_blocks": 65536, 00:20:27.267 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:27.267 "assigned_rate_limits": { 00:20:27.267 "rw_ios_per_sec": 0, 00:20:27.267 "rw_mbytes_per_sec": 0, 00:20:27.267 "r_mbytes_per_sec": 0, 00:20:27.267 "w_mbytes_per_sec": 0 00:20:27.267 }, 00:20:27.267 "claimed": true, 00:20:27.267 "claim_type": "exclusive_write", 00:20:27.267 "zoned": false, 00:20:27.267 "supported_io_types": { 00:20:27.267 "read": true, 00:20:27.267 "write": true, 00:20:27.267 "unmap": true, 00:20:27.267 "flush": true, 00:20:27.267 "reset": true, 00:20:27.267 "nvme_admin": false, 00:20:27.267 "nvme_io": false, 00:20:27.267 "nvme_io_md": false, 00:20:27.267 "write_zeroes": true, 00:20:27.267 "zcopy": true, 00:20:27.267 "get_zone_info": false, 00:20:27.267 "zone_management": false, 00:20:27.267 "zone_append": false, 00:20:27.267 "compare": false, 00:20:27.267 "compare_and_write": false, 00:20:27.267 "abort": true, 00:20:27.267 "seek_hole": false, 00:20:27.267 "seek_data": false, 00:20:27.267 "copy": true, 00:20:27.267 "nvme_iov_md": false 00:20:27.267 }, 00:20:27.267 "memory_domains": [ 00:20:27.267 { 00:20:27.267 "dma_device_id": "system", 00:20:27.267 "dma_device_type": 1 00:20:27.267 }, 00:20:27.267 { 00:20:27.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.267 "dma_device_type": 2 00:20:27.267 } 00:20:27.267 ], 00:20:27.267 "driver_specific": { 00:20:27.267 "passthru": { 00:20:27.267 "name": "pt2", 00:20:27.267 "base_bdev_name": "malloc2" 00:20:27.267 } 00:20:27.267 } 00:20:27.267 }' 00:20:27.267 13:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:27.267 13:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:27.527 13:20:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:27.527 13:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:27.527 13:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:27.527 13:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:27.527 13:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:27.527 13:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:27.527 13:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:27.527 13:20:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:27.527 13:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:27.785 13:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:27.785 13:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:27.785 13:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:27.785 13:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:27.785 13:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:27.785 "name": "pt3", 00:20:27.785 "aliases": [ 00:20:27.785 "00000000-0000-0000-0000-000000000003" 00:20:27.785 ], 00:20:27.785 "product_name": "passthru", 00:20:27.785 "block_size": 512, 00:20:27.785 "num_blocks": 65536, 00:20:27.785 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:27.785 "assigned_rate_limits": { 00:20:27.785 "rw_ios_per_sec": 0, 00:20:27.785 "rw_mbytes_per_sec": 0, 00:20:27.785 "r_mbytes_per_sec": 0, 00:20:27.785 "w_mbytes_per_sec": 0 00:20:27.785 }, 00:20:27.785 "claimed": true, 00:20:27.785 
"claim_type": "exclusive_write", 00:20:27.785 "zoned": false, 00:20:27.785 "supported_io_types": { 00:20:27.785 "read": true, 00:20:27.785 "write": true, 00:20:27.785 "unmap": true, 00:20:27.785 "flush": true, 00:20:27.785 "reset": true, 00:20:27.785 "nvme_admin": false, 00:20:27.785 "nvme_io": false, 00:20:27.785 "nvme_io_md": false, 00:20:27.785 "write_zeroes": true, 00:20:27.785 "zcopy": true, 00:20:27.785 "get_zone_info": false, 00:20:27.785 "zone_management": false, 00:20:27.785 "zone_append": false, 00:20:27.785 "compare": false, 00:20:27.785 "compare_and_write": false, 00:20:27.785 "abort": true, 00:20:27.785 "seek_hole": false, 00:20:27.785 "seek_data": false, 00:20:27.785 "copy": true, 00:20:27.785 "nvme_iov_md": false 00:20:27.785 }, 00:20:27.785 "memory_domains": [ 00:20:27.785 { 00:20:27.785 "dma_device_id": "system", 00:20:27.785 "dma_device_type": 1 00:20:27.785 }, 00:20:27.785 { 00:20:27.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.785 "dma_device_type": 2 00:20:27.785 } 00:20:27.785 ], 00:20:27.785 "driver_specific": { 00:20:27.785 "passthru": { 00:20:27.785 "name": "pt3", 00:20:27.785 "base_bdev_name": "malloc3" 00:20:27.785 } 00:20:27.785 } 00:20:27.785 }' 00:20:27.785 13:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:28.044 13:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:28.044 13:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:28.044 13:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:28.044 13:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:28.044 13:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:28.044 13:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:28.044 13:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:20:28.303 13:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:28.303 13:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:28.303 13:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:28.303 13:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:28.303 13:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:28.303 13:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:20:28.304 13:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:28.562 13:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:28.562 "name": "pt4", 00:20:28.562 "aliases": [ 00:20:28.562 "00000000-0000-0000-0000-000000000004" 00:20:28.562 ], 00:20:28.562 "product_name": "passthru", 00:20:28.562 "block_size": 512, 00:20:28.562 "num_blocks": 65536, 00:20:28.562 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:28.562 "assigned_rate_limits": { 00:20:28.562 "rw_ios_per_sec": 0, 00:20:28.562 "rw_mbytes_per_sec": 0, 00:20:28.562 "r_mbytes_per_sec": 0, 00:20:28.562 "w_mbytes_per_sec": 0 00:20:28.562 }, 00:20:28.562 "claimed": true, 00:20:28.562 "claim_type": "exclusive_write", 00:20:28.562 "zoned": false, 00:20:28.562 "supported_io_types": { 00:20:28.562 "read": true, 00:20:28.562 "write": true, 00:20:28.562 "unmap": true, 00:20:28.562 "flush": true, 00:20:28.562 "reset": true, 00:20:28.562 "nvme_admin": false, 00:20:28.562 "nvme_io": false, 00:20:28.562 "nvme_io_md": false, 00:20:28.562 "write_zeroes": true, 00:20:28.562 "zcopy": true, 00:20:28.562 "get_zone_info": false, 00:20:28.563 "zone_management": false, 00:20:28.563 "zone_append": false, 00:20:28.563 "compare": false, 00:20:28.563 
"compare_and_write": false, 00:20:28.563 "abort": true, 00:20:28.563 "seek_hole": false, 00:20:28.563 "seek_data": false, 00:20:28.563 "copy": true, 00:20:28.563 "nvme_iov_md": false 00:20:28.563 }, 00:20:28.563 "memory_domains": [ 00:20:28.563 { 00:20:28.563 "dma_device_id": "system", 00:20:28.563 "dma_device_type": 1 00:20:28.563 }, 00:20:28.563 { 00:20:28.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:28.563 "dma_device_type": 2 00:20:28.563 } 00:20:28.563 ], 00:20:28.563 "driver_specific": { 00:20:28.563 "passthru": { 00:20:28.563 "name": "pt4", 00:20:28.563 "base_bdev_name": "malloc4" 00:20:28.563 } 00:20:28.563 } 00:20:28.563 }' 00:20:28.563 13:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:28.563 13:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:28.563 13:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:28.563 13:20:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:28.563 13:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:28.563 13:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:28.563 13:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:28.563 13:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:28.821 13:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:28.821 13:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:28.821 13:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:28.821 13:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:28.821 13:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:20:28.821 13:20:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:29.081 [2024-07-26 13:20:09.426460] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:29.081 13:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=f2c1846a-3b34-480a-a608-21f56080565a 00:20:29.081 13:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z f2c1846a-3b34-480a-a608-21f56080565a ']' 00:20:29.081 13:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:29.340 [2024-07-26 13:20:09.654741] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:29.340 [2024-07-26 13:20:09.654760] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:29.340 [2024-07-26 13:20:09.654808] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:29.340 [2024-07-26 13:20:09.654867] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:29.340 [2024-07-26 13:20:09.654878] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1bfd560 name raid_bdev1, state offline 00:20:29.340 13:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.340 13:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:20:29.599 13:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:20:29.599 13:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:20:29.599 13:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 
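[Editor's note] The `bdev_raid.sh@450-451` steps above capture the RAID volume's UUID with `jq -r '.[] | .uuid'` and bail out (`'[' -z ... ']'`) if it is empty before deleting `raid_bdev1` and each passthru base. A rough Python equivalent of that extraction, using the UUID recorded in this log; the trimmed JSON shape is an assumption, reduced from the fuller `bdev_get_bdevs` output above:

```python
import json

# Output of `rpc.py bdev_get_bdevs -b raid_bdev1` from the log above,
# trimmed to the fields the uuid check needs.
raid_bdevs = json.loads("""
[
  {
    "name": "raid_bdev1",
    "uuid": "f2c1846a-3b34-480a-a608-21f56080565a",
    "state": "online"
  }
]
""")

# jq -r '.[] | .uuid' followed by the shell's `[ -z "$uuid" ]` guard:
# the test refuses to proceed to teardown with an empty UUID.
raid_bdev_uuid = next(b["uuid"] for b in raid_bdevs)
assert raid_bdev_uuid, "raid bdev uuid must be non-empty before deletion"
print(raid_bdev_uuid)
```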
00:20:29.599 13:20:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:29.858 13:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:20:29.858 13:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:29.858 13:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:20:29.858 13:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:30.117 13:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:20:30.117 13:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:30.376 13:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:30.376 13:20:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:30.635 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:20:30.635 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:30.635 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:20:30.635 13:20:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:30.636 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:30.636 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.636 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:30.636 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.636 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:30.636 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:30.636 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:20:30.636 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:20:30.636 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:30.895 [2024-07-26 13:20:11.250882] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:30.895 [2024-07-26 13:20:11.252137] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:30.895 [2024-07-26 13:20:11.252187] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:20:30.895 [2024-07-26 13:20:11.252218] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:20:30.895 [2024-07-26 13:20:11.252261] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:30.895 [2024-07-26 13:20:11.252297] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:30.895 [2024-07-26 13:20:11.252319] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:30.895 [2024-07-26 13:20:11.252339] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:20:30.895 [2024-07-26 13:20:11.252356] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:30.895 [2024-07-26 13:20:11.252365] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1da73f0 name raid_bdev1, state configuring 00:20:30.895 request: 00:20:30.895 { 00:20:30.895 "name": "raid_bdev1", 00:20:30.895 "raid_level": "concat", 00:20:30.895 "base_bdevs": [ 00:20:30.895 "malloc1", 00:20:30.895 "malloc2", 00:20:30.895 "malloc3", 00:20:30.895 "malloc4" 00:20:30.895 ], 00:20:30.895 "strip_size_kb": 64, 00:20:30.895 "superblock": false, 00:20:30.895 "method": "bdev_raid_create", 00:20:30.895 "req_id": 1 00:20:30.895 } 00:20:30.895 Got JSON-RPC error response 00:20:30.895 response: 00:20:30.895 { 00:20:30.895 "code": -17, 00:20:30.895 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:30.895 } 00:20:30.895 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:20:30.895 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:30.895 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 
00:20:30.895 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:30.895 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.895 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:20:31.154 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:20:31.154 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:20:31.154 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:31.478 [2024-07-26 13:20:11.691978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:31.478 [2024-07-26 13:20:11.692019] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:31.478 [2024-07-26 13:20:11.692037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1da7d50 00:20:31.478 [2024-07-26 13:20:11.692049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:31.478 [2024-07-26 13:20:11.693518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:31.478 [2024-07-26 13:20:11.693545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:31.478 [2024-07-26 13:20:11.693603] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:31.478 [2024-07-26 13:20:11.693628] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:31.478 pt1 00:20:31.478 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:20:31.478 13:20:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:31.478 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:31.478 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:31.478 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:31.478 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:31.478 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:31.478 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:31.478 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:31.478 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:31.478 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.478 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.478 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:31.478 "name": "raid_bdev1", 00:20:31.478 "uuid": "f2c1846a-3b34-480a-a608-21f56080565a", 00:20:31.478 "strip_size_kb": 64, 00:20:31.478 "state": "configuring", 00:20:31.478 "raid_level": "concat", 00:20:31.478 "superblock": true, 00:20:31.478 "num_base_bdevs": 4, 00:20:31.478 "num_base_bdevs_discovered": 1, 00:20:31.478 "num_base_bdevs_operational": 4, 00:20:31.478 "base_bdevs_list": [ 00:20:31.478 { 00:20:31.478 "name": "pt1", 00:20:31.478 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:31.478 "is_configured": true, 00:20:31.478 "data_offset": 2048, 00:20:31.478 "data_size": 63488 00:20:31.478 }, 
00:20:31.478 { 00:20:31.478 "name": null, 00:20:31.478 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:31.478 "is_configured": false, 00:20:31.478 "data_offset": 2048, 00:20:31.478 "data_size": 63488 00:20:31.478 }, 00:20:31.478 { 00:20:31.478 "name": null, 00:20:31.478 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:31.478 "is_configured": false, 00:20:31.478 "data_offset": 2048, 00:20:31.478 "data_size": 63488 00:20:31.478 }, 00:20:31.478 { 00:20:31.478 "name": null, 00:20:31.478 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:31.478 "is_configured": false, 00:20:31.478 "data_offset": 2048, 00:20:31.478 "data_size": 63488 00:20:31.478 } 00:20:31.478 ] 00:20:31.478 }' 00:20:31.478 13:20:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:31.478 13:20:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.047 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:20:32.048 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:32.307 [2024-07-26 13:20:12.734737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:32.307 [2024-07-26 13:20:12.734781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:32.307 [2024-07-26 13:20:12.734800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c04790 00:20:32.307 [2024-07-26 13:20:12.734812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:32.307 [2024-07-26 13:20:12.735118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:32.307 [2024-07-26 13:20:12.735136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:32.307 [2024-07-26 
13:20:12.735202] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:32.307 [2024-07-26 13:20:12.735221] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:32.307 pt2 00:20:32.307 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:32.566 [2024-07-26 13:20:12.963351] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:32.566 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:20:32.566 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:32.566 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:32.566 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:32.566 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:32.566 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:32.566 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:32.566 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:32.566 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:32.566 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:32.566 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.566 13:20:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.825 13:20:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:32.825 "name": "raid_bdev1", 00:20:32.825 "uuid": "f2c1846a-3b34-480a-a608-21f56080565a", 00:20:32.825 "strip_size_kb": 64, 00:20:32.825 "state": "configuring", 00:20:32.825 "raid_level": "concat", 00:20:32.825 "superblock": true, 00:20:32.825 "num_base_bdevs": 4, 00:20:32.825 "num_base_bdevs_discovered": 1, 00:20:32.825 "num_base_bdevs_operational": 4, 00:20:32.825 "base_bdevs_list": [ 00:20:32.825 { 00:20:32.825 "name": "pt1", 00:20:32.825 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:32.825 "is_configured": true, 00:20:32.825 "data_offset": 2048, 00:20:32.825 "data_size": 63488 00:20:32.825 }, 00:20:32.825 { 00:20:32.825 "name": null, 00:20:32.825 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:32.825 "is_configured": false, 00:20:32.825 "data_offset": 2048, 00:20:32.825 "data_size": 63488 00:20:32.825 }, 00:20:32.825 { 00:20:32.825 "name": null, 00:20:32.825 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:32.825 "is_configured": false, 00:20:32.825 "data_offset": 2048, 00:20:32.825 "data_size": 63488 00:20:32.825 }, 00:20:32.825 { 00:20:32.825 "name": null, 00:20:32.825 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:32.826 "is_configured": false, 00:20:32.826 "data_offset": 2048, 00:20:32.826 "data_size": 63488 00:20:32.826 } 00:20:32.826 ] 00:20:32.826 }' 00:20:32.826 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:32.826 13:20:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.393 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:20:33.393 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:20:33.393 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p 
pt2 -u 00000000-0000-0000-0000-000000000002 00:20:33.652 [2024-07-26 13:20:13.982026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:33.652 [2024-07-26 13:20:13.982076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:33.652 [2024-07-26 13:20:13.982094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bfea70 00:20:33.652 [2024-07-26 13:20:13.982105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:33.652 [2024-07-26 13:20:13.982462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:33.652 [2024-07-26 13:20:13.982483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:33.652 [2024-07-26 13:20:13.982542] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:33.652 [2024-07-26 13:20:13.982561] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:33.652 pt2 00:20:33.652 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:20:33.652 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:20:33.652 13:20:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:33.911 [2024-07-26 13:20:14.210625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:33.911 [2024-07-26 13:20:14.210662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:33.912 [2024-07-26 13:20:14.210677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bfc020 00:20:33.912 [2024-07-26 13:20:14.210688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:33.912 [2024-07-26 13:20:14.210972] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:33.912 [2024-07-26 13:20:14.210989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:33.912 [2024-07-26 13:20:14.211046] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:33.912 [2024-07-26 13:20:14.211063] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:33.912 pt3 00:20:33.912 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:20:33.912 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:20:33.912 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:33.912 [2024-07-26 13:20:14.435225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:33.912 [2024-07-26 13:20:14.435263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:33.912 [2024-07-26 13:20:14.435280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d9d8b0 00:20:33.912 [2024-07-26 13:20:14.435291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:33.912 [2024-07-26 13:20:14.435582] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:33.912 [2024-07-26 13:20:14.435598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:33.912 [2024-07-26 13:20:14.435649] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:33.912 [2024-07-26 13:20:14.435667] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:33.912 [2024-07-26 13:20:14.435779] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 
0x1bfdc20 00:20:33.912 [2024-07-26 13:20:14.435789] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:33.912 [2024-07-26 13:20:14.435942] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1bfb590 00:20:33.912 [2024-07-26 13:20:14.436058] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1bfdc20 00:20:33.912 [2024-07-26 13:20:14.436066] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1bfdc20 00:20:33.912 [2024-07-26 13:20:14.436163] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:34.171 pt4 00:20:34.171 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:20:34.171 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:20:34.171 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:20:34.171 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:34.171 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:34.171 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:34.171 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:34.171 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:34.171 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:34.171 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:34.171 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:34.171 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:34.171 13:20:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.171 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.171 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:34.171 "name": "raid_bdev1", 00:20:34.171 "uuid": "f2c1846a-3b34-480a-a608-21f56080565a", 00:20:34.171 "strip_size_kb": 64, 00:20:34.171 "state": "online", 00:20:34.171 "raid_level": "concat", 00:20:34.171 "superblock": true, 00:20:34.171 "num_base_bdevs": 4, 00:20:34.171 "num_base_bdevs_discovered": 4, 00:20:34.171 "num_base_bdevs_operational": 4, 00:20:34.171 "base_bdevs_list": [ 00:20:34.171 { 00:20:34.171 "name": "pt1", 00:20:34.171 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:34.171 "is_configured": true, 00:20:34.171 "data_offset": 2048, 00:20:34.171 "data_size": 63488 00:20:34.171 }, 00:20:34.171 { 00:20:34.171 "name": "pt2", 00:20:34.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:34.171 "is_configured": true, 00:20:34.171 "data_offset": 2048, 00:20:34.171 "data_size": 63488 00:20:34.171 }, 00:20:34.171 { 00:20:34.171 "name": "pt3", 00:20:34.171 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:34.171 "is_configured": true, 00:20:34.171 "data_offset": 2048, 00:20:34.171 "data_size": 63488 00:20:34.171 }, 00:20:34.171 { 00:20:34.171 "name": "pt4", 00:20:34.171 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:34.171 "is_configured": true, 00:20:34.171 "data_offset": 2048, 00:20:34.171 "data_size": 63488 00:20:34.171 } 00:20:34.171 ] 00:20:34.171 }' 00:20:34.171 13:20:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:34.171 13:20:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.739 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # 
verify_raid_bdev_properties raid_bdev1 00:20:34.739 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:34.739 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:34.739 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:34.739 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:34.739 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:34.739 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:34.739 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:34.998 [2024-07-26 13:20:15.458223] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:34.998 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:34.998 "name": "raid_bdev1", 00:20:34.998 "aliases": [ 00:20:34.998 "f2c1846a-3b34-480a-a608-21f56080565a" 00:20:34.998 ], 00:20:34.998 "product_name": "Raid Volume", 00:20:34.998 "block_size": 512, 00:20:34.998 "num_blocks": 253952, 00:20:34.998 "uuid": "f2c1846a-3b34-480a-a608-21f56080565a", 00:20:34.998 "assigned_rate_limits": { 00:20:34.998 "rw_ios_per_sec": 0, 00:20:34.998 "rw_mbytes_per_sec": 0, 00:20:34.998 "r_mbytes_per_sec": 0, 00:20:34.998 "w_mbytes_per_sec": 0 00:20:34.998 }, 00:20:34.998 "claimed": false, 00:20:34.998 "zoned": false, 00:20:34.998 "supported_io_types": { 00:20:34.998 "read": true, 00:20:34.998 "write": true, 00:20:34.998 "unmap": true, 00:20:34.998 "flush": true, 00:20:34.998 "reset": true, 00:20:34.998 "nvme_admin": false, 00:20:34.998 "nvme_io": false, 00:20:34.998 "nvme_io_md": false, 00:20:34.998 "write_zeroes": true, 00:20:34.998 "zcopy": false, 00:20:34.998 
"get_zone_info": false, 00:20:34.998 "zone_management": false, 00:20:34.998 "zone_append": false, 00:20:34.998 "compare": false, 00:20:34.998 "compare_and_write": false, 00:20:34.998 "abort": false, 00:20:34.998 "seek_hole": false, 00:20:34.998 "seek_data": false, 00:20:34.998 "copy": false, 00:20:34.998 "nvme_iov_md": false 00:20:34.998 }, 00:20:34.998 "memory_domains": [ 00:20:34.998 { 00:20:34.998 "dma_device_id": "system", 00:20:34.998 "dma_device_type": 1 00:20:34.998 }, 00:20:34.998 { 00:20:34.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.998 "dma_device_type": 2 00:20:34.998 }, 00:20:34.998 { 00:20:34.998 "dma_device_id": "system", 00:20:34.998 "dma_device_type": 1 00:20:34.998 }, 00:20:34.998 { 00:20:34.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.998 "dma_device_type": 2 00:20:34.998 }, 00:20:34.998 { 00:20:34.998 "dma_device_id": "system", 00:20:34.998 "dma_device_type": 1 00:20:34.998 }, 00:20:34.998 { 00:20:34.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.998 "dma_device_type": 2 00:20:34.998 }, 00:20:34.998 { 00:20:34.998 "dma_device_id": "system", 00:20:34.998 "dma_device_type": 1 00:20:34.998 }, 00:20:34.998 { 00:20:34.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.998 "dma_device_type": 2 00:20:34.998 } 00:20:34.998 ], 00:20:34.998 "driver_specific": { 00:20:34.998 "raid": { 00:20:34.998 "uuid": "f2c1846a-3b34-480a-a608-21f56080565a", 00:20:34.998 "strip_size_kb": 64, 00:20:34.998 "state": "online", 00:20:34.998 "raid_level": "concat", 00:20:34.998 "superblock": true, 00:20:34.998 "num_base_bdevs": 4, 00:20:34.998 "num_base_bdevs_discovered": 4, 00:20:34.998 "num_base_bdevs_operational": 4, 00:20:34.998 "base_bdevs_list": [ 00:20:34.998 { 00:20:34.999 "name": "pt1", 00:20:34.999 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:34.999 "is_configured": true, 00:20:34.999 "data_offset": 2048, 00:20:34.999 "data_size": 63488 00:20:34.999 }, 00:20:34.999 { 00:20:34.999 "name": "pt2", 00:20:34.999 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:20:34.999 "is_configured": true, 00:20:34.999 "data_offset": 2048, 00:20:34.999 "data_size": 63488 00:20:34.999 }, 00:20:34.999 { 00:20:34.999 "name": "pt3", 00:20:34.999 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:34.999 "is_configured": true, 00:20:34.999 "data_offset": 2048, 00:20:34.999 "data_size": 63488 00:20:34.999 }, 00:20:34.999 { 00:20:34.999 "name": "pt4", 00:20:34.999 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:34.999 "is_configured": true, 00:20:34.999 "data_offset": 2048, 00:20:34.999 "data_size": 63488 00:20:34.999 } 00:20:34.999 ] 00:20:34.999 } 00:20:34.999 } 00:20:34.999 }' 00:20:34.999 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:34.999 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:34.999 pt2 00:20:34.999 pt3 00:20:34.999 pt4' 00:20:34.999 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:35.258 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:35.258 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:35.258 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:35.258 "name": "pt1", 00:20:35.258 "aliases": [ 00:20:35.258 "00000000-0000-0000-0000-000000000001" 00:20:35.258 ], 00:20:35.258 "product_name": "passthru", 00:20:35.258 "block_size": 512, 00:20:35.258 "num_blocks": 65536, 00:20:35.258 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:35.258 "assigned_rate_limits": { 00:20:35.258 "rw_ios_per_sec": 0, 00:20:35.258 "rw_mbytes_per_sec": 0, 00:20:35.258 "r_mbytes_per_sec": 0, 00:20:35.258 "w_mbytes_per_sec": 0 00:20:35.258 }, 00:20:35.258 "claimed": 
true, 00:20:35.258 "claim_type": "exclusive_write", 00:20:35.258 "zoned": false, 00:20:35.258 "supported_io_types": { 00:20:35.258 "read": true, 00:20:35.258 "write": true, 00:20:35.258 "unmap": true, 00:20:35.258 "flush": true, 00:20:35.258 "reset": true, 00:20:35.258 "nvme_admin": false, 00:20:35.258 "nvme_io": false, 00:20:35.258 "nvme_io_md": false, 00:20:35.258 "write_zeroes": true, 00:20:35.258 "zcopy": true, 00:20:35.258 "get_zone_info": false, 00:20:35.258 "zone_management": false, 00:20:35.258 "zone_append": false, 00:20:35.258 "compare": false, 00:20:35.258 "compare_and_write": false, 00:20:35.258 "abort": true, 00:20:35.258 "seek_hole": false, 00:20:35.258 "seek_data": false, 00:20:35.258 "copy": true, 00:20:35.258 "nvme_iov_md": false 00:20:35.258 }, 00:20:35.258 "memory_domains": [ 00:20:35.258 { 00:20:35.258 "dma_device_id": "system", 00:20:35.258 "dma_device_type": 1 00:20:35.258 }, 00:20:35.258 { 00:20:35.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.258 "dma_device_type": 2 00:20:35.258 } 00:20:35.258 ], 00:20:35.258 "driver_specific": { 00:20:35.258 "passthru": { 00:20:35.258 "name": "pt1", 00:20:35.258 "base_bdev_name": "malloc1" 00:20:35.258 } 00:20:35.258 } 00:20:35.258 }' 00:20:35.258 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:35.517 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:35.517 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:35.517 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:35.517 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:35.517 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:35.517 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:35.517 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:20:35.517 13:20:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:35.517 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:35.776 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:35.776 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:35.776 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:35.776 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:35.776 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:36.035 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:36.035 "name": "pt2", 00:20:36.035 "aliases": [ 00:20:36.035 "00000000-0000-0000-0000-000000000002" 00:20:36.035 ], 00:20:36.035 "product_name": "passthru", 00:20:36.035 "block_size": 512, 00:20:36.035 "num_blocks": 65536, 00:20:36.035 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:36.035 "assigned_rate_limits": { 00:20:36.035 "rw_ios_per_sec": 0, 00:20:36.035 "rw_mbytes_per_sec": 0, 00:20:36.035 "r_mbytes_per_sec": 0, 00:20:36.035 "w_mbytes_per_sec": 0 00:20:36.035 }, 00:20:36.035 "claimed": true, 00:20:36.035 "claim_type": "exclusive_write", 00:20:36.035 "zoned": false, 00:20:36.035 "supported_io_types": { 00:20:36.035 "read": true, 00:20:36.035 "write": true, 00:20:36.035 "unmap": true, 00:20:36.035 "flush": true, 00:20:36.035 "reset": true, 00:20:36.035 "nvme_admin": false, 00:20:36.035 "nvme_io": false, 00:20:36.035 "nvme_io_md": false, 00:20:36.035 "write_zeroes": true, 00:20:36.035 "zcopy": true, 00:20:36.035 "get_zone_info": false, 00:20:36.035 "zone_management": false, 00:20:36.035 "zone_append": false, 00:20:36.035 "compare": false, 00:20:36.035 
"compare_and_write": false, 00:20:36.035 "abort": true, 00:20:36.035 "seek_hole": false, 00:20:36.035 "seek_data": false, 00:20:36.035 "copy": true, 00:20:36.035 "nvme_iov_md": false 00:20:36.035 }, 00:20:36.035 "memory_domains": [ 00:20:36.035 { 00:20:36.035 "dma_device_id": "system", 00:20:36.035 "dma_device_type": 1 00:20:36.035 }, 00:20:36.035 { 00:20:36.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.035 "dma_device_type": 2 00:20:36.035 } 00:20:36.035 ], 00:20:36.035 "driver_specific": { 00:20:36.035 "passthru": { 00:20:36.035 "name": "pt2", 00:20:36.035 "base_bdev_name": "malloc2" 00:20:36.035 } 00:20:36.035 } 00:20:36.035 }' 00:20:36.035 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:36.035 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:36.035 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:36.035 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:36.035 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:36.035 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:36.035 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:36.035 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:36.295 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:36.295 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:36.295 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:36.295 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:36.295 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:36.295 13:20:16 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:36.295 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:36.554 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:36.554 "name": "pt3", 00:20:36.554 "aliases": [ 00:20:36.554 "00000000-0000-0000-0000-000000000003" 00:20:36.554 ], 00:20:36.554 "product_name": "passthru", 00:20:36.554 "block_size": 512, 00:20:36.554 "num_blocks": 65536, 00:20:36.554 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:36.554 "assigned_rate_limits": { 00:20:36.554 "rw_ios_per_sec": 0, 00:20:36.554 "rw_mbytes_per_sec": 0, 00:20:36.554 "r_mbytes_per_sec": 0, 00:20:36.554 "w_mbytes_per_sec": 0 00:20:36.554 }, 00:20:36.554 "claimed": true, 00:20:36.554 "claim_type": "exclusive_write", 00:20:36.554 "zoned": false, 00:20:36.554 "supported_io_types": { 00:20:36.554 "read": true, 00:20:36.554 "write": true, 00:20:36.554 "unmap": true, 00:20:36.554 "flush": true, 00:20:36.554 "reset": true, 00:20:36.554 "nvme_admin": false, 00:20:36.554 "nvme_io": false, 00:20:36.554 "nvme_io_md": false, 00:20:36.554 "write_zeroes": true, 00:20:36.554 "zcopy": true, 00:20:36.554 "get_zone_info": false, 00:20:36.554 "zone_management": false, 00:20:36.554 "zone_append": false, 00:20:36.554 "compare": false, 00:20:36.554 "compare_and_write": false, 00:20:36.554 "abort": true, 00:20:36.554 "seek_hole": false, 00:20:36.554 "seek_data": false, 00:20:36.554 "copy": true, 00:20:36.554 "nvme_iov_md": false 00:20:36.554 }, 00:20:36.554 "memory_domains": [ 00:20:36.554 { 00:20:36.554 "dma_device_id": "system", 00:20:36.554 "dma_device_type": 1 00:20:36.554 }, 00:20:36.554 { 00:20:36.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.554 "dma_device_type": 2 00:20:36.554 } 00:20:36.554 ], 00:20:36.554 "driver_specific": { 00:20:36.554 "passthru": { 00:20:36.554 "name": "pt3", 00:20:36.554 
"base_bdev_name": "malloc3" 00:20:36.554 } 00:20:36.554 } 00:20:36.554 }' 00:20:36.554 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:36.554 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:36.554 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:36.554 13:20:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:36.554 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:36.554 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:36.554 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:36.951 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:36.951 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:36.951 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:36.951 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:36.951 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:36.951 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:36.951 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:20:36.951 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:36.951 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:36.951 "name": "pt4", 00:20:36.951 "aliases": [ 00:20:36.951 "00000000-0000-0000-0000-000000000004" 00:20:36.951 ], 00:20:36.951 "product_name": "passthru", 00:20:36.951 "block_size": 512, 00:20:36.951 "num_blocks": 65536, 
00:20:36.951 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:36.951 "assigned_rate_limits": { 00:20:36.951 "rw_ios_per_sec": 0, 00:20:36.951 "rw_mbytes_per_sec": 0, 00:20:36.951 "r_mbytes_per_sec": 0, 00:20:36.951 "w_mbytes_per_sec": 0 00:20:36.951 }, 00:20:36.951 "claimed": true, 00:20:36.951 "claim_type": "exclusive_write", 00:20:36.951 "zoned": false, 00:20:36.951 "supported_io_types": { 00:20:36.951 "read": true, 00:20:36.951 "write": true, 00:20:36.951 "unmap": true, 00:20:36.951 "flush": true, 00:20:36.951 "reset": true, 00:20:36.951 "nvme_admin": false, 00:20:36.951 "nvme_io": false, 00:20:36.951 "nvme_io_md": false, 00:20:36.951 "write_zeroes": true, 00:20:36.951 "zcopy": true, 00:20:36.951 "get_zone_info": false, 00:20:36.951 "zone_management": false, 00:20:36.951 "zone_append": false, 00:20:36.951 "compare": false, 00:20:36.951 "compare_and_write": false, 00:20:36.951 "abort": true, 00:20:36.951 "seek_hole": false, 00:20:36.951 "seek_data": false, 00:20:36.951 "copy": true, 00:20:36.951 "nvme_iov_md": false 00:20:36.951 }, 00:20:36.951 "memory_domains": [ 00:20:36.951 { 00:20:36.951 "dma_device_id": "system", 00:20:36.951 "dma_device_type": 1 00:20:36.951 }, 00:20:36.951 { 00:20:36.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.951 "dma_device_type": 2 00:20:36.951 } 00:20:36.951 ], 00:20:36.951 "driver_specific": { 00:20:36.951 "passthru": { 00:20:36.951 "name": "pt4", 00:20:36.951 "base_bdev_name": "malloc4" 00:20:36.951 } 00:20:36.951 } 00:20:36.951 }' 00:20:36.951 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:37.210 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:37.210 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:37.210 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:37.210 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:20:37.210 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:37.210 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:37.210 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:37.210 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:37.210 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:37.469 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:37.469 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:37.469 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:37.469 13:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:20:37.730 [2024-07-26 13:20:18.008910] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:37.730 13:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' f2c1846a-3b34-480a-a608-21f56080565a '!=' f2c1846a-3b34-480a-a608-21f56080565a ']' 00:20:37.730 13:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:20:37.730 13:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:37.730 13:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:37.730 13:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 756630 00:20:37.730 13:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 756630 ']' 00:20:37.730 13:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 756630 00:20:37.730 13:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # 
uname 00:20:37.730 13:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:37.730 13:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 756630 00:20:37.730 13:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:37.730 13:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:37.730 13:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 756630' 00:20:37.730 killing process with pid 756630 00:20:37.730 13:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 756630 00:20:37.730 [2024-07-26 13:20:18.085655] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:37.730 [2024-07-26 13:20:18.085712] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:37.730 [2024-07-26 13:20:18.085772] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:37.730 [2024-07-26 13:20:18.085783] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1bfdc20 name raid_bdev1, state offline 00:20:37.730 13:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 756630 00:20:37.730 [2024-07-26 13:20:18.117484] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:37.989 13:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:20:37.989 00:20:37.989 real 0m15.482s 00:20:37.989 user 0m27.938s 00:20:37.989 sys 0m2.737s 00:20:37.989 13:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:37.989 13:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.989 ************************************ 00:20:37.989 END TEST raid_superblock_test 00:20:37.989 
************************************ 00:20:37.989 13:20:18 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:20:37.989 13:20:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:37.989 13:20:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:37.989 13:20:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:37.989 ************************************ 00:20:37.989 START TEST raid_read_error_test 00:20:37.989 ************************************ 00:20:37.989 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:20:37.989 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:20:37.989 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 
00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.NA12bay126 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=759597 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 759597 /var/tmp/spdk-raid.sock 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 759597 ']' 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:37.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:37.990 13:20:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.990 [2024-07-26 13:20:18.462315] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:20:37.990 [2024-07-26 13:20:18.462375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid759597 ] 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3d:02.1 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3d:02.3 cannot be used 
00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3f:01.7 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:38.250 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:38.250 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:38.250 EAL: Requested device 0000:3f:02.7 cannot be used 00:20:38.250 [2024-07-26 13:20:18.593209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.250 [2024-07-26 13:20:18.679109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.250 [2024-07-26 13:20:18.734455] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:38.250 [2024-07-26 13:20:18.734486] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:39.188 13:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:39.188 13:20:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:20:39.188 13:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:39.188 13:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:39.188 BaseBdev1_malloc 00:20:39.188 13:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:39.447 true 00:20:39.447 13:20:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:39.706 [2024-07-26 13:20:19.994909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:39.706 [2024-07-26 13:20:19.994948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:39.706 [2024-07-26 13:20:19.994967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fab190 00:20:39.706 [2024-07-26 13:20:19.994979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:39.706 [2024-07-26 13:20:19.996529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:39.706 [2024-07-26 13:20:19.996557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:39.706 BaseBdev1 00:20:39.706 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:39.706 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:39.706 BaseBdev2_malloc 00:20:39.706 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:39.966 true 00:20:39.966 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:40.225 [2024-07-26 13:20:20.644997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:20:40.225 [2024-07-26 13:20:20.645033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:40.225 [2024-07-26 13:20:20.645050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fafe20 00:20:40.225 [2024-07-26 13:20:20.645062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:40.225 [2024-07-26 13:20:20.646386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:40.225 [2024-07-26 13:20:20.646413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:40.225 BaseBdev2 00:20:40.225 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:40.225 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:40.484 BaseBdev3_malloc 00:20:40.484 13:20:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:40.743 true 00:20:40.743 13:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:41.002 [2024-07-26 13:20:21.331050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:41.002 [2024-07-26 13:20:21.331090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:41.002 [2024-07-26 13:20:21.331109] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fb0d90 00:20:41.002 [2024-07-26 13:20:21.331120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:41.002 [2024-07-26 
13:20:21.332501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:41.002 [2024-07-26 13:20:21.332528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:41.002 BaseBdev3 00:20:41.002 13:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:41.002 13:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:41.262 BaseBdev4_malloc 00:20:41.262 13:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:20:41.262 true 00:20:41.521 13:20:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:20:41.521 [2024-07-26 13:20:21.989162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:20:41.521 [2024-07-26 13:20:21.989200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:41.521 [2024-07-26 13:20:21.989223] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1fb3000 00:20:41.521 [2024-07-26 13:20:21.989235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:41.521 [2024-07-26 13:20:21.990562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:41.521 [2024-07-26 13:20:21.990590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:41.521 BaseBdev4 00:20:41.521 13:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:20:41.781 [2024-07-26 13:20:22.213782] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:41.781 [2024-07-26 13:20:22.214955] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:41.781 [2024-07-26 13:20:22.215020] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:41.781 [2024-07-26 13:20:22.215072] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:41.781 [2024-07-26 13:20:22.215284] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1fb3dd0 00:20:41.781 [2024-07-26 13:20:22.215295] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:41.781 [2024-07-26 13:20:22.215478] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f9efe0 00:20:41.781 [2024-07-26 13:20:22.215610] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1fb3dd0 00:20:41.781 [2024-07-26 13:20:22.215619] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1fb3dd0 00:20:41.781 [2024-07-26 13:20:22.215723] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:41.781 13:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:20:41.781 13:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:41.781 13:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:41.781 13:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:41.781 13:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:41.781 13:20:22 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:41.781 13:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:41.781 13:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:41.781 13:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:41.781 13:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:41.781 13:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.781 13:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.040 13:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:42.040 "name": "raid_bdev1", 00:20:42.040 "uuid": "9f781ecf-dbc4-4c2e-9fd6-4ad66b070443", 00:20:42.040 "strip_size_kb": 64, 00:20:42.040 "state": "online", 00:20:42.040 "raid_level": "concat", 00:20:42.040 "superblock": true, 00:20:42.040 "num_base_bdevs": 4, 00:20:42.040 "num_base_bdevs_discovered": 4, 00:20:42.040 "num_base_bdevs_operational": 4, 00:20:42.040 "base_bdevs_list": [ 00:20:42.040 { 00:20:42.040 "name": "BaseBdev1", 00:20:42.040 "uuid": "8e8f67bf-e12e-5343-ad0d-eddd07a2eb4b", 00:20:42.040 "is_configured": true, 00:20:42.040 "data_offset": 2048, 00:20:42.040 "data_size": 63488 00:20:42.040 }, 00:20:42.040 { 00:20:42.040 "name": "BaseBdev2", 00:20:42.040 "uuid": "fc413815-6679-5135-9475-0f8827b869a0", 00:20:42.040 "is_configured": true, 00:20:42.040 "data_offset": 2048, 00:20:42.040 "data_size": 63488 00:20:42.040 }, 00:20:42.040 { 00:20:42.040 "name": "BaseBdev3", 00:20:42.040 "uuid": "fe3c6830-f50c-5f52-aeff-dba392119afe", 00:20:42.040 "is_configured": true, 00:20:42.040 "data_offset": 2048, 00:20:42.040 "data_size": 63488 00:20:42.040 }, 00:20:42.040 { 
00:20:42.040 "name": "BaseBdev4", 00:20:42.040 "uuid": "863ff1c7-d3c5-5832-bbec-6053f7024366", 00:20:42.040 "is_configured": true, 00:20:42.040 "data_offset": 2048, 00:20:42.040 "data_size": 63488 00:20:42.040 } 00:20:42.040 ] 00:20:42.040 }' 00:20:42.040 13:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:42.040 13:20:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.609 13:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:20:42.609 13:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:42.609 [2024-07-26 13:20:23.124413] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1fb4c50 00:20:43.559 13:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:20:43.819 13:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:20:43.819 13:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:20:43.819 13:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:20:43.819 13:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:20:43.819 13:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:43.819 13:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:43.819 13:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:43.819 13:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:20:43.819 13:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:43.819 13:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:43.819 13:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:43.819 13:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:43.819 13:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:43.819 13:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.819 13:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.078 13:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:44.078 "name": "raid_bdev1", 00:20:44.078 "uuid": "9f781ecf-dbc4-4c2e-9fd6-4ad66b070443", 00:20:44.078 "strip_size_kb": 64, 00:20:44.078 "state": "online", 00:20:44.078 "raid_level": "concat", 00:20:44.078 "superblock": true, 00:20:44.078 "num_base_bdevs": 4, 00:20:44.078 "num_base_bdevs_discovered": 4, 00:20:44.078 "num_base_bdevs_operational": 4, 00:20:44.078 "base_bdevs_list": [ 00:20:44.078 { 00:20:44.078 "name": "BaseBdev1", 00:20:44.078 "uuid": "8e8f67bf-e12e-5343-ad0d-eddd07a2eb4b", 00:20:44.078 "is_configured": true, 00:20:44.078 "data_offset": 2048, 00:20:44.078 "data_size": 63488 00:20:44.078 }, 00:20:44.078 { 00:20:44.078 "name": "BaseBdev2", 00:20:44.078 "uuid": "fc413815-6679-5135-9475-0f8827b869a0", 00:20:44.078 "is_configured": true, 00:20:44.078 "data_offset": 2048, 00:20:44.078 "data_size": 63488 00:20:44.078 }, 00:20:44.078 { 00:20:44.078 "name": "BaseBdev3", 00:20:44.078 "uuid": "fe3c6830-f50c-5f52-aeff-dba392119afe", 00:20:44.078 "is_configured": true, 00:20:44.078 "data_offset": 2048, 00:20:44.078 
"data_size": 63488 00:20:44.078 }, 00:20:44.078 { 00:20:44.078 "name": "BaseBdev4", 00:20:44.078 "uuid": "863ff1c7-d3c5-5832-bbec-6053f7024366", 00:20:44.078 "is_configured": true, 00:20:44.078 "data_offset": 2048, 00:20:44.078 "data_size": 63488 00:20:44.078 } 00:20:44.078 ] 00:20:44.078 }' 00:20:44.078 13:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:44.078 13:20:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.649 13:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:44.940 [2024-07-26 13:20:25.226655] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:44.940 [2024-07-26 13:20:25.226685] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:44.940 [2024-07-26 13:20:25.229594] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:44.940 [2024-07-26 13:20:25.229628] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:44.940 [2024-07-26 13:20:25.229664] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:44.940 [2024-07-26 13:20:25.229675] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fb3dd0 name raid_bdev1, state offline 00:20:44.940 0 00:20:44.940 13:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 759597 00:20:44.940 13:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 759597 ']' 00:20:44.940 13:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 759597 00:20:44.940 13:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:20:44.940 13:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux 
= Linux ']' 00:20:44.940 13:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 759597 00:20:44.940 13:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:44.940 13:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:44.940 13:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 759597' 00:20:44.940 killing process with pid 759597 00:20:44.940 13:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 759597 00:20:44.940 [2024-07-26 13:20:25.302207] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:44.940 13:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 759597 00:20:44.940 [2024-07-26 13:20:25.329322] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:45.200 13:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.NA12bay126 00:20:45.200 13:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:20:45.200 13:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:20:45.200 13:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.48 00:20:45.200 13:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:20:45.200 13:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:45.200 13:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:45.200 13:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.48 != \0\.\0\0 ]] 00:20:45.200 00:20:45.200 real 0m7.140s 00:20:45.200 user 0m11.373s 00:20:45.200 sys 0m1.218s 00:20:45.200 13:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:45.200 13:20:25 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.200 ************************************ 00:20:45.200 END TEST raid_read_error_test 00:20:45.200 ************************************ 00:20:45.200 13:20:25 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:20:45.200 13:20:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:45.200 13:20:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:45.200 13:20:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:45.200 ************************************ 00:20:45.200 START TEST raid_write_error_test 00:20:45.201 ************************************ 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= 
num_base_bdevs )) 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.Gs6y9Jg0OW 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@824 -- # raid_pid=760780 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 760780 /var/tmp/spdk-raid.sock 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 760780 ']' 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:45.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.201 13:20:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:45.461 [2024-07-26 13:20:25.738740] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:20:45.461 [2024-07-26 13:20:25.738872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid760780 ] 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3d:02.1 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3d:02.3 cannot be used 
00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3f:01.7 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:45.461 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:45.461 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:45.461 EAL: Requested device 0000:3f:02.7 cannot be used 00:20:45.461 [2024-07-26 13:20:25.941773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.720 [2024-07-26 13:20:26.025093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.720 [2024-07-26 13:20:26.091087] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:45.720 [2024-07-26 13:20:26.091118] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:46.290 13:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:46.290 13:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:20:46.290 13:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:46.290 13:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:46.290 BaseBdev1_malloc 00:20:46.290 13:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:46.549 true 00:20:46.549 13:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:46.808 [2024-07-26 13:20:27.228015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:46.808 [2024-07-26 13:20:27.228055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.808 [2024-07-26 13:20:27.228073] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x153e190 00:20:46.808 [2024-07-26 13:20:27.228085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.809 [2024-07-26 13:20:27.229671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.809 [2024-07-26 13:20:27.229698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:46.809 BaseBdev1 00:20:46.809 13:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:46.809 13:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:47.068 BaseBdev2_malloc 00:20:47.068 13:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:47.327 true 00:20:47.327 13:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:47.587 [2024-07-26 13:20:27.885984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:20:47.587 [2024-07-26 13:20:27.886023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:47.587 [2024-07-26 13:20:27.886041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1542e20 00:20:47.587 [2024-07-26 13:20:27.886053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:47.587 [2024-07-26 13:20:27.887451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:47.587 [2024-07-26 13:20:27.887477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:47.587 BaseBdev2 00:20:47.587 13:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:47.587 13:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:47.587 BaseBdev3_malloc 00:20:47.846 13:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:47.846 true 00:20:47.846 13:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:48.105 [2024-07-26 13:20:28.544001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:48.105 [2024-07-26 13:20:28.544039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:48.105 [2024-07-26 13:20:28.544060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1543d90 00:20:48.105 [2024-07-26 13:20:28.544071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:48.105 [2024-07-26 
13:20:28.545459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:48.105 [2024-07-26 13:20:28.545486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:48.105 BaseBdev3 00:20:48.105 13:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:48.105 13:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:48.365 BaseBdev4_malloc 00:20:48.365 13:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:20:48.624 true 00:20:48.624 13:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:20:48.883 [2024-07-26 13:20:29.213961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:20:48.883 [2024-07-26 13:20:29.214003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:48.883 [2024-07-26 13:20:29.214021] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1546000 00:20:48.883 [2024-07-26 13:20:29.214033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:48.883 [2024-07-26 13:20:29.215473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:48.883 [2024-07-26 13:20:29.215499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:48.883 BaseBdev4 00:20:48.883 13:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:20:49.143 [2024-07-26 13:20:29.430572] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:49.143 [2024-07-26 13:20:29.431776] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:49.143 [2024-07-26 13:20:29.431840] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:49.143 [2024-07-26 13:20:29.431893] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:49.143 [2024-07-26 13:20:29.432090] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1546dd0 00:20:49.143 [2024-07-26 13:20:29.432101] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:20:49.143 [2024-07-26 13:20:29.432295] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1531fe0 00:20:49.143 [2024-07-26 13:20:29.432428] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1546dd0 00:20:49.143 [2024-07-26 13:20:29.432438] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1546dd0 00:20:49.143 [2024-07-26 13:20:29.432544] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:49.143 13:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:20:49.143 13:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:49.143 13:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:49.143 13:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:49.143 13:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:49.143 13:20:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:49.143 13:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:49.143 13:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:49.143 13:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:49.143 13:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:49.143 13:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.143 13:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.143 13:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:49.143 "name": "raid_bdev1", 00:20:49.143 "uuid": "b0e22ec2-1bca-4789-8b64-c893bbf85911", 00:20:49.143 "strip_size_kb": 64, 00:20:49.143 "state": "online", 00:20:49.143 "raid_level": "concat", 00:20:49.143 "superblock": true, 00:20:49.143 "num_base_bdevs": 4, 00:20:49.143 "num_base_bdevs_discovered": 4, 00:20:49.143 "num_base_bdevs_operational": 4, 00:20:49.143 "base_bdevs_list": [ 00:20:49.143 { 00:20:49.143 "name": "BaseBdev1", 00:20:49.143 "uuid": "3ca8960e-5faa-500c-83fc-c611f85c6596", 00:20:49.143 "is_configured": true, 00:20:49.143 "data_offset": 2048, 00:20:49.143 "data_size": 63488 00:20:49.143 }, 00:20:49.143 { 00:20:49.143 "name": "BaseBdev2", 00:20:49.143 "uuid": "c8e269ec-4a43-5c37-9e2f-264bf876f4ff", 00:20:49.143 "is_configured": true, 00:20:49.143 "data_offset": 2048, 00:20:49.143 "data_size": 63488 00:20:49.143 }, 00:20:49.143 { 00:20:49.143 "name": "BaseBdev3", 00:20:49.143 "uuid": "25d59e5c-7307-54d5-8937-bda5c82dfe17", 00:20:49.143 "is_configured": true, 00:20:49.143 "data_offset": 2048, 00:20:49.143 "data_size": 
63488 00:20:49.144 }, 00:20:49.144 { 00:20:49.144 "name": "BaseBdev4", 00:20:49.144 "uuid": "cb1c394f-db7c-5893-9599-96163db1aeb5", 00:20:49.144 "is_configured": true, 00:20:49.144 "data_offset": 2048, 00:20:49.144 "data_size": 63488 00:20:49.144 } 00:20:49.144 ] 00:20:49.144 }' 00:20:49.144 13:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:49.144 13:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.712 13:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:20:49.712 13:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:49.971 [2024-07-26 13:20:30.305100] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1547c50 00:20:50.923 13:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:20:50.923 13:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:20:50.923 13:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:20:50.923 13:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:20:50.923 13:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:20:50.923 13:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:50.923 13:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:50.923 13:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:50.923 13:20:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:50.923 13:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:50.923 13:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:50.923 13:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:50.923 13:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:50.923 13:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:50.923 13:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.923 13:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.182 13:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:51.182 "name": "raid_bdev1", 00:20:51.182 "uuid": "b0e22ec2-1bca-4789-8b64-c893bbf85911", 00:20:51.182 "strip_size_kb": 64, 00:20:51.182 "state": "online", 00:20:51.182 "raid_level": "concat", 00:20:51.182 "superblock": true, 00:20:51.182 "num_base_bdevs": 4, 00:20:51.182 "num_base_bdevs_discovered": 4, 00:20:51.182 "num_base_bdevs_operational": 4, 00:20:51.182 "base_bdevs_list": [ 00:20:51.182 { 00:20:51.182 "name": "BaseBdev1", 00:20:51.182 "uuid": "3ca8960e-5faa-500c-83fc-c611f85c6596", 00:20:51.182 "is_configured": true, 00:20:51.182 "data_offset": 2048, 00:20:51.182 "data_size": 63488 00:20:51.182 }, 00:20:51.182 { 00:20:51.182 "name": "BaseBdev2", 00:20:51.182 "uuid": "c8e269ec-4a43-5c37-9e2f-264bf876f4ff", 00:20:51.182 "is_configured": true, 00:20:51.182 "data_offset": 2048, 00:20:51.182 "data_size": 63488 00:20:51.182 }, 00:20:51.182 { 00:20:51.182 "name": "BaseBdev3", 00:20:51.182 "uuid": "25d59e5c-7307-54d5-8937-bda5c82dfe17", 00:20:51.182 
"is_configured": true, 00:20:51.182 "data_offset": 2048, 00:20:51.182 "data_size": 63488 00:20:51.182 }, 00:20:51.182 { 00:20:51.182 "name": "BaseBdev4", 00:20:51.182 "uuid": "cb1c394f-db7c-5893-9599-96163db1aeb5", 00:20:51.182 "is_configured": true, 00:20:51.182 "data_offset": 2048, 00:20:51.182 "data_size": 63488 00:20:51.182 } 00:20:51.182 ] 00:20:51.182 }' 00:20:51.182 13:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:51.182 13:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.750 13:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:52.009 [2024-07-26 13:20:32.455632] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:52.010 [2024-07-26 13:20:32.455657] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:52.010 [2024-07-26 13:20:32.458663] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:52.010 [2024-07-26 13:20:32.458698] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:52.010 [2024-07-26 13:20:32.458734] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:52.010 [2024-07-26 13:20:32.458744] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1546dd0 name raid_bdev1, state offline 00:20:52.010 0 00:20:52.010 13:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 760780 00:20:52.010 13:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 760780 ']' 00:20:52.010 13:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 760780 00:20:52.010 13:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:20:52.010 13:20:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:52.010 13:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 760780 00:20:52.010 13:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:52.010 13:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:52.010 13:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 760780' 00:20:52.010 killing process with pid 760780 00:20:52.010 13:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 760780 00:20:52.010 [2024-07-26 13:20:32.532650] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:52.010 13:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 760780 00:20:52.269 [2024-07-26 13:20:32.560625] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:52.269 13:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.Gs6y9Jg0OW 00:20:52.269 13:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:20:52.269 13:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:20:52.269 13:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:20:52.269 13:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:20:52.269 13:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:52.269 13:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:52.269 13:20:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:20:52.269 00:20:52.269 real 0m7.151s 00:20:52.269 user 0m11.328s 00:20:52.269 sys 0m1.291s 00:20:52.269 13:20:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:52.269 13:20:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.269 ************************************ 00:20:52.269 END TEST raid_write_error_test 00:20:52.269 ************************************ 00:20:52.529 13:20:32 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:20:52.529 13:20:32 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:20:52.529 13:20:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:52.529 13:20:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:52.529 13:20:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:52.529 ************************************ 00:20:52.529 START TEST raid_state_function_test 00:20:52.529 ************************************ 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' 
false = true ']' 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=762180 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 762180' 00:20:52.529 Process raid pid: 762180 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 762180 /var/tmp/spdk-raid.sock 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 762180 ']' 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:52.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:52.529 13:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.529 [2024-07-26 13:20:32.920651] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:20:52.529 [2024-07-26 13:20:32.920698] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3d:02.1 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3d:02.3 cannot be used 00:20:52.529 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3f:01.7 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:52.529 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:52.529 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:52.529 EAL: Requested device 0000:3f:02.7 cannot be used 00:20:52.529 [2024-07-26 13:20:33.040150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.789 [2024-07-26 13:20:33.121831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.789 [2024-07-26 13:20:33.180048] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:52.789 [2024-07-26 13:20:33.180084] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:53.725 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:53.725 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:20:53.725 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:53.985 [2024-07-26 13:20:34.263504] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:53.985 [2024-07-26 13:20:34.263544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:20:53.985 [2024-07-26 13:20:34.263554] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:53.985 [2024-07-26 13:20:34.263565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:53.985 [2024-07-26 13:20:34.263573] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:53.985 [2024-07-26 13:20:34.263583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:53.985 [2024-07-26 13:20:34.263591] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:53.985 [2024-07-26 13:20:34.263601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:53.985 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:53.985 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:53.985 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:53.985 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:53.985 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:53.985 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:53.985 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:53.985 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:53.985 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:53.985 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:53.985 13:20:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.985 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:54.244 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:54.244 "name": "Existed_Raid", 00:20:54.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.244 "strip_size_kb": 0, 00:20:54.244 "state": "configuring", 00:20:54.244 "raid_level": "raid1", 00:20:54.244 "superblock": false, 00:20:54.244 "num_base_bdevs": 4, 00:20:54.244 "num_base_bdevs_discovered": 0, 00:20:54.244 "num_base_bdevs_operational": 4, 00:20:54.244 "base_bdevs_list": [ 00:20:54.244 { 00:20:54.244 "name": "BaseBdev1", 00:20:54.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.244 "is_configured": false, 00:20:54.244 "data_offset": 0, 00:20:54.244 "data_size": 0 00:20:54.244 }, 00:20:54.244 { 00:20:54.244 "name": "BaseBdev2", 00:20:54.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.244 "is_configured": false, 00:20:54.244 "data_offset": 0, 00:20:54.244 "data_size": 0 00:20:54.244 }, 00:20:54.244 { 00:20:54.244 "name": "BaseBdev3", 00:20:54.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.244 "is_configured": false, 00:20:54.244 "data_offset": 0, 00:20:54.244 "data_size": 0 00:20:54.244 }, 00:20:54.244 { 00:20:54.244 "name": "BaseBdev4", 00:20:54.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.244 "is_configured": false, 00:20:54.244 "data_offset": 0, 00:20:54.244 "data_size": 0 00:20:54.244 } 00:20:54.244 ] 00:20:54.244 }' 00:20:54.244 13:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:54.244 13:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.812 13:20:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:54.812 [2024-07-26 13:20:35.286077] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:54.812 [2024-07-26 13:20:35.286111] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1186f60 name Existed_Raid, state configuring 00:20:54.812 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:55.071 [2024-07-26 13:20:35.514697] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:55.071 [2024-07-26 13:20:35.514721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:55.071 [2024-07-26 13:20:35.514729] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:55.071 [2024-07-26 13:20:35.514740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:55.071 [2024-07-26 13:20:35.514748] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:55.071 [2024-07-26 13:20:35.514758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:55.071 [2024-07-26 13:20:35.514765] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:55.071 [2024-07-26 13:20:35.514775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:55.071 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:55.330 [2024-07-26 13:20:35.752794] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:55.330 BaseBdev1 00:20:55.330 13:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:55.330 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:55.330 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:55.330 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:55.330 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:55.330 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:55.330 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:55.590 13:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:55.849 [ 00:20:55.849 { 00:20:55.849 "name": "BaseBdev1", 00:20:55.849 "aliases": [ 00:20:55.849 "15176353-ad96-49e1-8c81-e9b22429fe69" 00:20:55.849 ], 00:20:55.849 "product_name": "Malloc disk", 00:20:55.849 "block_size": 512, 00:20:55.849 "num_blocks": 65536, 00:20:55.849 "uuid": "15176353-ad96-49e1-8c81-e9b22429fe69", 00:20:55.849 "assigned_rate_limits": { 00:20:55.849 "rw_ios_per_sec": 0, 00:20:55.849 "rw_mbytes_per_sec": 0, 00:20:55.849 "r_mbytes_per_sec": 0, 00:20:55.849 "w_mbytes_per_sec": 0 00:20:55.849 }, 00:20:55.849 "claimed": true, 00:20:55.849 "claim_type": "exclusive_write", 00:20:55.849 "zoned": false, 00:20:55.849 "supported_io_types": { 00:20:55.849 "read": true, 00:20:55.849 "write": true, 00:20:55.849 "unmap": true, 00:20:55.849 "flush": true, 00:20:55.849 
"reset": true, 00:20:55.849 "nvme_admin": false, 00:20:55.849 "nvme_io": false, 00:20:55.849 "nvme_io_md": false, 00:20:55.849 "write_zeroes": true, 00:20:55.849 "zcopy": true, 00:20:55.849 "get_zone_info": false, 00:20:55.849 "zone_management": false, 00:20:55.849 "zone_append": false, 00:20:55.849 "compare": false, 00:20:55.849 "compare_and_write": false, 00:20:55.849 "abort": true, 00:20:55.849 "seek_hole": false, 00:20:55.849 "seek_data": false, 00:20:55.849 "copy": true, 00:20:55.849 "nvme_iov_md": false 00:20:55.849 }, 00:20:55.849 "memory_domains": [ 00:20:55.849 { 00:20:55.849 "dma_device_id": "system", 00:20:55.849 "dma_device_type": 1 00:20:55.849 }, 00:20:55.849 { 00:20:55.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.849 "dma_device_type": 2 00:20:55.849 } 00:20:55.849 ], 00:20:55.849 "driver_specific": {} 00:20:55.849 } 00:20:55.849 ] 00:20:55.849 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:55.849 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:55.849 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:55.849 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:55.849 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:55.849 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:55.849 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:55.849 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:55.849 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:55.849 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:20:55.849 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:55.849 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.849 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:56.109 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:56.109 "name": "Existed_Raid", 00:20:56.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.109 "strip_size_kb": 0, 00:20:56.109 "state": "configuring", 00:20:56.109 "raid_level": "raid1", 00:20:56.109 "superblock": false, 00:20:56.109 "num_base_bdevs": 4, 00:20:56.109 "num_base_bdevs_discovered": 1, 00:20:56.109 "num_base_bdevs_operational": 4, 00:20:56.109 "base_bdevs_list": [ 00:20:56.109 { 00:20:56.109 "name": "BaseBdev1", 00:20:56.109 "uuid": "15176353-ad96-49e1-8c81-e9b22429fe69", 00:20:56.109 "is_configured": true, 00:20:56.109 "data_offset": 0, 00:20:56.109 "data_size": 65536 00:20:56.109 }, 00:20:56.109 { 00:20:56.109 "name": "BaseBdev2", 00:20:56.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.109 "is_configured": false, 00:20:56.109 "data_offset": 0, 00:20:56.109 "data_size": 0 00:20:56.109 }, 00:20:56.109 { 00:20:56.109 "name": "BaseBdev3", 00:20:56.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.109 "is_configured": false, 00:20:56.109 "data_offset": 0, 00:20:56.109 "data_size": 0 00:20:56.109 }, 00:20:56.109 { 00:20:56.109 "name": "BaseBdev4", 00:20:56.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.109 "is_configured": false, 00:20:56.109 "data_offset": 0, 00:20:56.109 "data_size": 0 00:20:56.109 } 00:20:56.109 ] 00:20:56.109 }' 00:20:56.109 13:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:20:56.109 13:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:56.678 13:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:56.937 [2024-07-26 13:20:37.244725] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:56.937 [2024-07-26 13:20:37.244765] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11867d0 name Existed_Raid, state configuring 00:20:56.937 13:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:57.197 [2024-07-26 13:20:37.473360] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:57.197 [2024-07-26 13:20:37.474784] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:57.197 [2024-07-26 13:20:37.474822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:57.197 [2024-07-26 13:20:37.474831] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:57.197 [2024-07-26 13:20:37.474842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:57.197 [2024-07-26 13:20:37.474850] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:57.197 [2024-07-26 13:20:37.474860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:57.197 13:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:57.197 13:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:57.197 13:20:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:57.197 13:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:57.197 13:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:57.197 13:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:57.197 13:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:57.197 13:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:57.197 13:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:57.197 13:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:57.197 13:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:57.197 13:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:57.197 13:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.197 13:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:57.457 13:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:57.457 "name": "Existed_Raid", 00:20:57.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.457 "strip_size_kb": 0, 00:20:57.457 "state": "configuring", 00:20:57.457 "raid_level": "raid1", 00:20:57.457 "superblock": false, 00:20:57.457 "num_base_bdevs": 4, 00:20:57.457 "num_base_bdevs_discovered": 1, 00:20:57.457 "num_base_bdevs_operational": 4, 00:20:57.457 "base_bdevs_list": [ 00:20:57.457 { 00:20:57.457 
"name": "BaseBdev1", 00:20:57.457 "uuid": "15176353-ad96-49e1-8c81-e9b22429fe69", 00:20:57.457 "is_configured": true, 00:20:57.457 "data_offset": 0, 00:20:57.457 "data_size": 65536 00:20:57.457 }, 00:20:57.457 { 00:20:57.457 "name": "BaseBdev2", 00:20:57.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.457 "is_configured": false, 00:20:57.457 "data_offset": 0, 00:20:57.457 "data_size": 0 00:20:57.457 }, 00:20:57.457 { 00:20:57.457 "name": "BaseBdev3", 00:20:57.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.457 "is_configured": false, 00:20:57.457 "data_offset": 0, 00:20:57.457 "data_size": 0 00:20:57.457 }, 00:20:57.457 { 00:20:57.457 "name": "BaseBdev4", 00:20:57.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.457 "is_configured": false, 00:20:57.457 "data_offset": 0, 00:20:57.457 "data_size": 0 00:20:57.457 } 00:20:57.457 ] 00:20:57.457 }' 00:20:57.457 13:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:57.457 13:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.028 13:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:58.028 [2024-07-26 13:20:38.499277] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:58.028 BaseBdev2 00:20:58.028 13:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:58.028 13:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:58.028 13:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:58.028 13:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:58.028 13:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 
-- # [[ -z '' ]] 00:20:58.028 13:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:58.028 13:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:58.287 13:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:58.546 [ 00:20:58.546 { 00:20:58.546 "name": "BaseBdev2", 00:20:58.546 "aliases": [ 00:20:58.546 "2520df01-5953-4a29-9b25-53095d233b6b" 00:20:58.546 ], 00:20:58.546 "product_name": "Malloc disk", 00:20:58.546 "block_size": 512, 00:20:58.546 "num_blocks": 65536, 00:20:58.546 "uuid": "2520df01-5953-4a29-9b25-53095d233b6b", 00:20:58.546 "assigned_rate_limits": { 00:20:58.546 "rw_ios_per_sec": 0, 00:20:58.546 "rw_mbytes_per_sec": 0, 00:20:58.546 "r_mbytes_per_sec": 0, 00:20:58.546 "w_mbytes_per_sec": 0 00:20:58.546 }, 00:20:58.546 "claimed": true, 00:20:58.546 "claim_type": "exclusive_write", 00:20:58.546 "zoned": false, 00:20:58.546 "supported_io_types": { 00:20:58.546 "read": true, 00:20:58.546 "write": true, 00:20:58.546 "unmap": true, 00:20:58.546 "flush": true, 00:20:58.546 "reset": true, 00:20:58.546 "nvme_admin": false, 00:20:58.546 "nvme_io": false, 00:20:58.546 "nvme_io_md": false, 00:20:58.546 "write_zeroes": true, 00:20:58.546 "zcopy": true, 00:20:58.546 "get_zone_info": false, 00:20:58.546 "zone_management": false, 00:20:58.546 "zone_append": false, 00:20:58.546 "compare": false, 00:20:58.546 "compare_and_write": false, 00:20:58.546 "abort": true, 00:20:58.546 "seek_hole": false, 00:20:58.546 "seek_data": false, 00:20:58.546 "copy": true, 00:20:58.546 "nvme_iov_md": false 00:20:58.546 }, 00:20:58.546 "memory_domains": [ 00:20:58.546 { 00:20:58.546 "dma_device_id": "system", 00:20:58.546 
"dma_device_type": 1 00:20:58.546 }, 00:20:58.546 { 00:20:58.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.546 "dma_device_type": 2 00:20:58.546 } 00:20:58.546 ], 00:20:58.546 "driver_specific": {} 00:20:58.546 } 00:20:58.546 ] 00:20:58.546 13:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:58.546 13:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:58.546 13:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:58.546 13:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:58.546 13:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:58.546 13:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:58.546 13:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:58.546 13:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:58.546 13:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:58.546 13:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:58.546 13:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:58.546 13:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:58.546 13:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:58.546 13:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.546 13:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:20:58.806 13:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:58.806 "name": "Existed_Raid", 00:20:58.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.806 "strip_size_kb": 0, 00:20:58.806 "state": "configuring", 00:20:58.806 "raid_level": "raid1", 00:20:58.806 "superblock": false, 00:20:58.806 "num_base_bdevs": 4, 00:20:58.806 "num_base_bdevs_discovered": 2, 00:20:58.806 "num_base_bdevs_operational": 4, 00:20:58.806 "base_bdevs_list": [ 00:20:58.806 { 00:20:58.806 "name": "BaseBdev1", 00:20:58.806 "uuid": "15176353-ad96-49e1-8c81-e9b22429fe69", 00:20:58.806 "is_configured": true, 00:20:58.806 "data_offset": 0, 00:20:58.806 "data_size": 65536 00:20:58.806 }, 00:20:58.806 { 00:20:58.806 "name": "BaseBdev2", 00:20:58.806 "uuid": "2520df01-5953-4a29-9b25-53095d233b6b", 00:20:58.806 "is_configured": true, 00:20:58.806 "data_offset": 0, 00:20:58.806 "data_size": 65536 00:20:58.806 }, 00:20:58.806 { 00:20:58.806 "name": "BaseBdev3", 00:20:58.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.806 "is_configured": false, 00:20:58.806 "data_offset": 0, 00:20:58.806 "data_size": 0 00:20:58.806 }, 00:20:58.806 { 00:20:58.806 "name": "BaseBdev4", 00:20:58.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.806 "is_configured": false, 00:20:58.806 "data_offset": 0, 00:20:58.806 "data_size": 0 00:20:58.806 } 00:20:58.806 ] 00:20:58.806 }' 00:20:58.806 13:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:58.806 13:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.374 13:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:59.374 [2024-07-26 13:20:39.878072] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:20:59.374 BaseBdev3 00:20:59.374 13:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:59.374 13:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:20:59.374 13:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:59.374 13:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:59.374 13:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:59.374 13:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:59.374 13:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:59.634 13:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:59.893 [ 00:20:59.893 { 00:20:59.893 "name": "BaseBdev3", 00:20:59.893 "aliases": [ 00:20:59.893 "8535f156-3ad2-43fa-b719-7e4a10639445" 00:20:59.893 ], 00:20:59.893 "product_name": "Malloc disk", 00:20:59.893 "block_size": 512, 00:20:59.893 "num_blocks": 65536, 00:20:59.893 "uuid": "8535f156-3ad2-43fa-b719-7e4a10639445", 00:20:59.893 "assigned_rate_limits": { 00:20:59.893 "rw_ios_per_sec": 0, 00:20:59.893 "rw_mbytes_per_sec": 0, 00:20:59.893 "r_mbytes_per_sec": 0, 00:20:59.893 "w_mbytes_per_sec": 0 00:20:59.893 }, 00:20:59.893 "claimed": true, 00:20:59.893 "claim_type": "exclusive_write", 00:20:59.893 "zoned": false, 00:20:59.894 "supported_io_types": { 00:20:59.894 "read": true, 00:20:59.894 "write": true, 00:20:59.894 "unmap": true, 00:20:59.894 "flush": true, 00:20:59.894 "reset": true, 00:20:59.894 "nvme_admin": false, 00:20:59.894 "nvme_io": 
false, 00:20:59.894 "nvme_io_md": false, 00:20:59.894 "write_zeroes": true, 00:20:59.894 "zcopy": true, 00:20:59.894 "get_zone_info": false, 00:20:59.894 "zone_management": false, 00:20:59.894 "zone_append": false, 00:20:59.894 "compare": false, 00:20:59.894 "compare_and_write": false, 00:20:59.894 "abort": true, 00:20:59.894 "seek_hole": false, 00:20:59.894 "seek_data": false, 00:20:59.894 "copy": true, 00:20:59.894 "nvme_iov_md": false 00:20:59.894 }, 00:20:59.894 "memory_domains": [ 00:20:59.894 { 00:20:59.894 "dma_device_id": "system", 00:20:59.894 "dma_device_type": 1 00:20:59.894 }, 00:20:59.894 { 00:20:59.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:59.894 "dma_device_type": 2 00:20:59.894 } 00:20:59.894 ], 00:20:59.894 "driver_specific": {} 00:20:59.894 } 00:20:59.894 ] 00:20:59.894 13:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:59.894 13:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:59.894 13:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:59.894 13:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:59.894 13:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:59.894 13:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:59.894 13:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:59.894 13:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:59.894 13:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:20:59.894 13:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:59.894 13:20:40 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:59.894 13:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:59.894 13:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:59.894 13:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.894 13:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:00.153 13:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:00.153 "name": "Existed_Raid", 00:21:00.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.153 "strip_size_kb": 0, 00:21:00.153 "state": "configuring", 00:21:00.153 "raid_level": "raid1", 00:21:00.153 "superblock": false, 00:21:00.153 "num_base_bdevs": 4, 00:21:00.153 "num_base_bdevs_discovered": 3, 00:21:00.153 "num_base_bdevs_operational": 4, 00:21:00.153 "base_bdevs_list": [ 00:21:00.153 { 00:21:00.153 "name": "BaseBdev1", 00:21:00.153 "uuid": "15176353-ad96-49e1-8c81-e9b22429fe69", 00:21:00.153 "is_configured": true, 00:21:00.153 "data_offset": 0, 00:21:00.153 "data_size": 65536 00:21:00.153 }, 00:21:00.153 { 00:21:00.153 "name": "BaseBdev2", 00:21:00.153 "uuid": "2520df01-5953-4a29-9b25-53095d233b6b", 00:21:00.153 "is_configured": true, 00:21:00.153 "data_offset": 0, 00:21:00.153 "data_size": 65536 00:21:00.153 }, 00:21:00.153 { 00:21:00.153 "name": "BaseBdev3", 00:21:00.153 "uuid": "8535f156-3ad2-43fa-b719-7e4a10639445", 00:21:00.153 "is_configured": true, 00:21:00.153 "data_offset": 0, 00:21:00.153 "data_size": 65536 00:21:00.153 }, 00:21:00.153 { 00:21:00.153 "name": "BaseBdev4", 00:21:00.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.153 "is_configured": false, 00:21:00.153 "data_offset": 0, 00:21:00.153 "data_size": 0 00:21:00.153 } 
00:21:00.153 ] 00:21:00.153 }' 00:21:00.153 13:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:00.153 13:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.722 13:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:00.982 [2024-07-26 13:20:41.353357] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:00.982 [2024-07-26 13:20:41.353396] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1187840 00:21:00.982 [2024-07-26 13:20:41.353404] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:00.982 [2024-07-26 13:20:41.353590] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1187480 00:21:00.982 [2024-07-26 13:20:41.353708] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1187840 00:21:00.982 [2024-07-26 13:20:41.353717] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1187840 00:21:00.982 [2024-07-26 13:20:41.353871] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.982 BaseBdev4 00:21:00.982 13:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:21:00.982 13:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:21:00.982 13:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:00.982 13:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:00.982 13:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:00.982 13:20:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:00.982 13:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:01.242 13:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:01.501 [ 00:21:01.501 { 00:21:01.502 "name": "BaseBdev4", 00:21:01.502 "aliases": [ 00:21:01.502 "a4a31f02-abc1-47f3-810a-f1445bf739da" 00:21:01.502 ], 00:21:01.502 "product_name": "Malloc disk", 00:21:01.502 "block_size": 512, 00:21:01.502 "num_blocks": 65536, 00:21:01.502 "uuid": "a4a31f02-abc1-47f3-810a-f1445bf739da", 00:21:01.502 "assigned_rate_limits": { 00:21:01.502 "rw_ios_per_sec": 0, 00:21:01.502 "rw_mbytes_per_sec": 0, 00:21:01.502 "r_mbytes_per_sec": 0, 00:21:01.502 "w_mbytes_per_sec": 0 00:21:01.502 }, 00:21:01.502 "claimed": true, 00:21:01.502 "claim_type": "exclusive_write", 00:21:01.502 "zoned": false, 00:21:01.502 "supported_io_types": { 00:21:01.502 "read": true, 00:21:01.502 "write": true, 00:21:01.502 "unmap": true, 00:21:01.502 "flush": true, 00:21:01.502 "reset": true, 00:21:01.502 "nvme_admin": false, 00:21:01.502 "nvme_io": false, 00:21:01.502 "nvme_io_md": false, 00:21:01.502 "write_zeroes": true, 00:21:01.502 "zcopy": true, 00:21:01.502 "get_zone_info": false, 00:21:01.502 "zone_management": false, 00:21:01.502 "zone_append": false, 00:21:01.502 "compare": false, 00:21:01.502 "compare_and_write": false, 00:21:01.502 "abort": true, 00:21:01.502 "seek_hole": false, 00:21:01.502 "seek_data": false, 00:21:01.502 "copy": true, 00:21:01.502 "nvme_iov_md": false 00:21:01.502 }, 00:21:01.502 "memory_domains": [ 00:21:01.502 { 00:21:01.502 "dma_device_id": "system", 00:21:01.502 "dma_device_type": 1 00:21:01.502 }, 00:21:01.502 { 00:21:01.502 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:21:01.502 "dma_device_type": 2 00:21:01.502 } 00:21:01.502 ], 00:21:01.502 "driver_specific": {} 00:21:01.502 } 00:21:01.502 ] 00:21:01.502 13:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:01.502 13:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:01.502 13:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:01.502 13:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:21:01.502 13:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:01.502 13:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:01.502 13:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:01.502 13:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:01.502 13:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:01.502 13:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:01.502 13:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:01.502 13:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:01.502 13:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:01.502 13:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.502 13:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:01.761 13:20:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:01.761 "name": "Existed_Raid", 00:21:01.761 "uuid": "d24efb2b-2655-4297-8dd5-19168dc9946c", 00:21:01.761 "strip_size_kb": 0, 00:21:01.761 "state": "online", 00:21:01.761 "raid_level": "raid1", 00:21:01.761 "superblock": false, 00:21:01.761 "num_base_bdevs": 4, 00:21:01.761 "num_base_bdevs_discovered": 4, 00:21:01.761 "num_base_bdevs_operational": 4, 00:21:01.761 "base_bdevs_list": [ 00:21:01.761 { 00:21:01.761 "name": "BaseBdev1", 00:21:01.761 "uuid": "15176353-ad96-49e1-8c81-e9b22429fe69", 00:21:01.761 "is_configured": true, 00:21:01.761 "data_offset": 0, 00:21:01.761 "data_size": 65536 00:21:01.761 }, 00:21:01.761 { 00:21:01.761 "name": "BaseBdev2", 00:21:01.761 "uuid": "2520df01-5953-4a29-9b25-53095d233b6b", 00:21:01.761 "is_configured": true, 00:21:01.761 "data_offset": 0, 00:21:01.761 "data_size": 65536 00:21:01.761 }, 00:21:01.761 { 00:21:01.761 "name": "BaseBdev3", 00:21:01.761 "uuid": "8535f156-3ad2-43fa-b719-7e4a10639445", 00:21:01.761 "is_configured": true, 00:21:01.761 "data_offset": 0, 00:21:01.761 "data_size": 65536 00:21:01.761 }, 00:21:01.761 { 00:21:01.761 "name": "BaseBdev4", 00:21:01.761 "uuid": "a4a31f02-abc1-47f3-810a-f1445bf739da", 00:21:01.761 "is_configured": true, 00:21:01.761 "data_offset": 0, 00:21:01.761 "data_size": 65536 00:21:01.761 } 00:21:01.762 ] 00:21:01.762 }' 00:21:01.762 13:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:01.762 13:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.330 13:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:02.330 13:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:02.330 13:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:02.330 13:20:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:02.330 13:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:02.330 13:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:02.330 13:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:02.330 13:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:02.330 [2024-07-26 13:20:42.845579] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:02.589 13:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:02.589 "name": "Existed_Raid", 00:21:02.589 "aliases": [ 00:21:02.589 "d24efb2b-2655-4297-8dd5-19168dc9946c" 00:21:02.589 ], 00:21:02.589 "product_name": "Raid Volume", 00:21:02.589 "block_size": 512, 00:21:02.589 "num_blocks": 65536, 00:21:02.589 "uuid": "d24efb2b-2655-4297-8dd5-19168dc9946c", 00:21:02.589 "assigned_rate_limits": { 00:21:02.589 "rw_ios_per_sec": 0, 00:21:02.589 "rw_mbytes_per_sec": 0, 00:21:02.589 "r_mbytes_per_sec": 0, 00:21:02.589 "w_mbytes_per_sec": 0 00:21:02.589 }, 00:21:02.589 "claimed": false, 00:21:02.589 "zoned": false, 00:21:02.589 "supported_io_types": { 00:21:02.589 "read": true, 00:21:02.589 "write": true, 00:21:02.589 "unmap": false, 00:21:02.589 "flush": false, 00:21:02.589 "reset": true, 00:21:02.589 "nvme_admin": false, 00:21:02.589 "nvme_io": false, 00:21:02.589 "nvme_io_md": false, 00:21:02.589 "write_zeroes": true, 00:21:02.589 "zcopy": false, 00:21:02.589 "get_zone_info": false, 00:21:02.589 "zone_management": false, 00:21:02.589 "zone_append": false, 00:21:02.589 "compare": false, 00:21:02.589 "compare_and_write": false, 00:21:02.589 "abort": false, 00:21:02.589 "seek_hole": false, 00:21:02.589 "seek_data": 
false, 00:21:02.589 "copy": false, 00:21:02.589 "nvme_iov_md": false 00:21:02.589 }, 00:21:02.589 "memory_domains": [ 00:21:02.589 { 00:21:02.589 "dma_device_id": "system", 00:21:02.589 "dma_device_type": 1 00:21:02.589 }, 00:21:02.589 { 00:21:02.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.589 "dma_device_type": 2 00:21:02.589 }, 00:21:02.589 { 00:21:02.589 "dma_device_id": "system", 00:21:02.589 "dma_device_type": 1 00:21:02.589 }, 00:21:02.589 { 00:21:02.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.589 "dma_device_type": 2 00:21:02.589 }, 00:21:02.589 { 00:21:02.589 "dma_device_id": "system", 00:21:02.589 "dma_device_type": 1 00:21:02.589 }, 00:21:02.589 { 00:21:02.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.589 "dma_device_type": 2 00:21:02.589 }, 00:21:02.589 { 00:21:02.589 "dma_device_id": "system", 00:21:02.589 "dma_device_type": 1 00:21:02.589 }, 00:21:02.589 { 00:21:02.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.589 "dma_device_type": 2 00:21:02.589 } 00:21:02.590 ], 00:21:02.590 "driver_specific": { 00:21:02.590 "raid": { 00:21:02.590 "uuid": "d24efb2b-2655-4297-8dd5-19168dc9946c", 00:21:02.590 "strip_size_kb": 0, 00:21:02.590 "state": "online", 00:21:02.590 "raid_level": "raid1", 00:21:02.590 "superblock": false, 00:21:02.590 "num_base_bdevs": 4, 00:21:02.590 "num_base_bdevs_discovered": 4, 00:21:02.590 "num_base_bdevs_operational": 4, 00:21:02.590 "base_bdevs_list": [ 00:21:02.590 { 00:21:02.590 "name": "BaseBdev1", 00:21:02.590 "uuid": "15176353-ad96-49e1-8c81-e9b22429fe69", 00:21:02.590 "is_configured": true, 00:21:02.590 "data_offset": 0, 00:21:02.590 "data_size": 65536 00:21:02.590 }, 00:21:02.590 { 00:21:02.590 "name": "BaseBdev2", 00:21:02.590 "uuid": "2520df01-5953-4a29-9b25-53095d233b6b", 00:21:02.590 "is_configured": true, 00:21:02.590 "data_offset": 0, 00:21:02.590 "data_size": 65536 00:21:02.590 }, 00:21:02.590 { 00:21:02.590 "name": "BaseBdev3", 00:21:02.590 "uuid": 
"8535f156-3ad2-43fa-b719-7e4a10639445", 00:21:02.590 "is_configured": true, 00:21:02.590 "data_offset": 0, 00:21:02.590 "data_size": 65536 00:21:02.590 }, 00:21:02.590 { 00:21:02.590 "name": "BaseBdev4", 00:21:02.590 "uuid": "a4a31f02-abc1-47f3-810a-f1445bf739da", 00:21:02.590 "is_configured": true, 00:21:02.590 "data_offset": 0, 00:21:02.590 "data_size": 65536 00:21:02.590 } 00:21:02.590 ] 00:21:02.590 } 00:21:02.590 } 00:21:02.590 }' 00:21:02.590 13:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:02.590 13:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:02.590 BaseBdev2 00:21:02.590 BaseBdev3 00:21:02.590 BaseBdev4' 00:21:02.590 13:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:02.590 13:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:02.590 13:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:02.849 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:02.849 "name": "BaseBdev1", 00:21:02.849 "aliases": [ 00:21:02.849 "15176353-ad96-49e1-8c81-e9b22429fe69" 00:21:02.849 ], 00:21:02.849 "product_name": "Malloc disk", 00:21:02.849 "block_size": 512, 00:21:02.849 "num_blocks": 65536, 00:21:02.849 "uuid": "15176353-ad96-49e1-8c81-e9b22429fe69", 00:21:02.849 "assigned_rate_limits": { 00:21:02.849 "rw_ios_per_sec": 0, 00:21:02.849 "rw_mbytes_per_sec": 0, 00:21:02.849 "r_mbytes_per_sec": 0, 00:21:02.849 "w_mbytes_per_sec": 0 00:21:02.849 }, 00:21:02.849 "claimed": true, 00:21:02.849 "claim_type": "exclusive_write", 00:21:02.849 "zoned": false, 00:21:02.849 "supported_io_types": { 00:21:02.849 "read": true, 00:21:02.849 
"write": true, 00:21:02.849 "unmap": true, 00:21:02.849 "flush": true, 00:21:02.849 "reset": true, 00:21:02.849 "nvme_admin": false, 00:21:02.849 "nvme_io": false, 00:21:02.849 "nvme_io_md": false, 00:21:02.849 "write_zeroes": true, 00:21:02.849 "zcopy": true, 00:21:02.849 "get_zone_info": false, 00:21:02.849 "zone_management": false, 00:21:02.849 "zone_append": false, 00:21:02.849 "compare": false, 00:21:02.849 "compare_and_write": false, 00:21:02.849 "abort": true, 00:21:02.849 "seek_hole": false, 00:21:02.849 "seek_data": false, 00:21:02.849 "copy": true, 00:21:02.849 "nvme_iov_md": false 00:21:02.849 }, 00:21:02.849 "memory_domains": [ 00:21:02.849 { 00:21:02.849 "dma_device_id": "system", 00:21:02.849 "dma_device_type": 1 00:21:02.849 }, 00:21:02.849 { 00:21:02.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.849 "dma_device_type": 2 00:21:02.849 } 00:21:02.849 ], 00:21:02.849 "driver_specific": {} 00:21:02.849 }' 00:21:02.849 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:02.849 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:02.849 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:02.849 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:02.849 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:02.849 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:02.849 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:02.849 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:03.108 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:03.108 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:03.108 13:20:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:03.108 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:03.108 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:03.108 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:03.108 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:03.367 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:03.367 "name": "BaseBdev2", 00:21:03.367 "aliases": [ 00:21:03.367 "2520df01-5953-4a29-9b25-53095d233b6b" 00:21:03.367 ], 00:21:03.367 "product_name": "Malloc disk", 00:21:03.367 "block_size": 512, 00:21:03.367 "num_blocks": 65536, 00:21:03.367 "uuid": "2520df01-5953-4a29-9b25-53095d233b6b", 00:21:03.367 "assigned_rate_limits": { 00:21:03.367 "rw_ios_per_sec": 0, 00:21:03.367 "rw_mbytes_per_sec": 0, 00:21:03.367 "r_mbytes_per_sec": 0, 00:21:03.367 "w_mbytes_per_sec": 0 00:21:03.367 }, 00:21:03.367 "claimed": true, 00:21:03.367 "claim_type": "exclusive_write", 00:21:03.367 "zoned": false, 00:21:03.367 "supported_io_types": { 00:21:03.367 "read": true, 00:21:03.367 "write": true, 00:21:03.367 "unmap": true, 00:21:03.367 "flush": true, 00:21:03.367 "reset": true, 00:21:03.367 "nvme_admin": false, 00:21:03.367 "nvme_io": false, 00:21:03.367 "nvme_io_md": false, 00:21:03.367 "write_zeroes": true, 00:21:03.367 "zcopy": true, 00:21:03.367 "get_zone_info": false, 00:21:03.367 "zone_management": false, 00:21:03.367 "zone_append": false, 00:21:03.367 "compare": false, 00:21:03.367 "compare_and_write": false, 00:21:03.367 "abort": true, 00:21:03.367 "seek_hole": false, 00:21:03.367 "seek_data": false, 00:21:03.367 "copy": true, 00:21:03.367 "nvme_iov_md": false 00:21:03.367 }, 
00:21:03.367 "memory_domains": [ 00:21:03.367 { 00:21:03.367 "dma_device_id": "system", 00:21:03.367 "dma_device_type": 1 00:21:03.367 }, 00:21:03.367 { 00:21:03.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.367 "dma_device_type": 2 00:21:03.367 } 00:21:03.367 ], 00:21:03.367 "driver_specific": {} 00:21:03.367 }' 00:21:03.367 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:03.367 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:03.367 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:03.367 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:03.367 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:03.367 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:03.368 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:03.368 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:03.627 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:03.627 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:03.627 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:03.627 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:03.627 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:03.627 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:03.627 13:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:03.627 13:20:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:03.627 "name": "BaseBdev3", 00:21:03.627 "aliases": [ 00:21:03.627 "8535f156-3ad2-43fa-b719-7e4a10639445" 00:21:03.627 ], 00:21:03.627 "product_name": "Malloc disk", 00:21:03.627 "block_size": 512, 00:21:03.627 "num_blocks": 65536, 00:21:03.627 "uuid": "8535f156-3ad2-43fa-b719-7e4a10639445", 00:21:03.627 "assigned_rate_limits": { 00:21:03.627 "rw_ios_per_sec": 0, 00:21:03.627 "rw_mbytes_per_sec": 0, 00:21:03.627 "r_mbytes_per_sec": 0, 00:21:03.627 "w_mbytes_per_sec": 0 00:21:03.627 }, 00:21:03.627 "claimed": true, 00:21:03.627 "claim_type": "exclusive_write", 00:21:03.627 "zoned": false, 00:21:03.627 "supported_io_types": { 00:21:03.627 "read": true, 00:21:03.627 "write": true, 00:21:03.627 "unmap": true, 00:21:03.627 "flush": true, 00:21:03.627 "reset": true, 00:21:03.627 "nvme_admin": false, 00:21:03.627 "nvme_io": false, 00:21:03.627 "nvme_io_md": false, 00:21:03.627 "write_zeroes": true, 00:21:03.627 "zcopy": true, 00:21:03.627 "get_zone_info": false, 00:21:03.627 "zone_management": false, 00:21:03.627 "zone_append": false, 00:21:03.627 "compare": false, 00:21:03.627 "compare_and_write": false, 00:21:03.627 "abort": true, 00:21:03.627 "seek_hole": false, 00:21:03.627 "seek_data": false, 00:21:03.627 "copy": true, 00:21:03.627 "nvme_iov_md": false 00:21:03.627 }, 00:21:03.627 "memory_domains": [ 00:21:03.627 { 00:21:03.627 "dma_device_id": "system", 00:21:03.627 "dma_device_type": 1 00:21:03.627 }, 00:21:03.627 { 00:21:03.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.627 "dma_device_type": 2 00:21:03.627 } 00:21:03.627 ], 00:21:03.627 "driver_specific": {} 00:21:03.627 }' 00:21:03.627 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:03.886 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:03.886 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 
== 512 ]] 00:21:03.886 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:03.886 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:03.886 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:03.886 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:03.886 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:03.886 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:03.886 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:04.145 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:04.145 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:04.145 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:04.145 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:04.145 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:04.145 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:04.145 "name": "BaseBdev4", 00:21:04.145 "aliases": [ 00:21:04.145 "a4a31f02-abc1-47f3-810a-f1445bf739da" 00:21:04.145 ], 00:21:04.145 "product_name": "Malloc disk", 00:21:04.145 "block_size": 512, 00:21:04.145 "num_blocks": 65536, 00:21:04.145 "uuid": "a4a31f02-abc1-47f3-810a-f1445bf739da", 00:21:04.145 "assigned_rate_limits": { 00:21:04.145 "rw_ios_per_sec": 0, 00:21:04.145 "rw_mbytes_per_sec": 0, 00:21:04.145 "r_mbytes_per_sec": 0, 00:21:04.145 "w_mbytes_per_sec": 0 00:21:04.145 }, 00:21:04.145 "claimed": true, 00:21:04.145 
"claim_type": "exclusive_write", 00:21:04.145 "zoned": false, 00:21:04.145 "supported_io_types": { 00:21:04.145 "read": true, 00:21:04.145 "write": true, 00:21:04.145 "unmap": true, 00:21:04.145 "flush": true, 00:21:04.145 "reset": true, 00:21:04.145 "nvme_admin": false, 00:21:04.145 "nvme_io": false, 00:21:04.145 "nvme_io_md": false, 00:21:04.145 "write_zeroes": true, 00:21:04.145 "zcopy": true, 00:21:04.145 "get_zone_info": false, 00:21:04.145 "zone_management": false, 00:21:04.145 "zone_append": false, 00:21:04.145 "compare": false, 00:21:04.145 "compare_and_write": false, 00:21:04.146 "abort": true, 00:21:04.146 "seek_hole": false, 00:21:04.146 "seek_data": false, 00:21:04.146 "copy": true, 00:21:04.146 "nvme_iov_md": false 00:21:04.146 }, 00:21:04.146 "memory_domains": [ 00:21:04.146 { 00:21:04.146 "dma_device_id": "system", 00:21:04.146 "dma_device_type": 1 00:21:04.146 }, 00:21:04.146 { 00:21:04.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.146 "dma_device_type": 2 00:21:04.146 } 00:21:04.146 ], 00:21:04.146 "driver_specific": {} 00:21:04.146 }' 00:21:04.404 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:04.404 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:04.404 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:04.404 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:04.404 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:04.404 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:04.404 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:04.404 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:04.404 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == 
null ]] 00:21:04.404 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:04.662 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:04.662 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:04.662 13:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:04.922 [2024-07-26 13:20:45.195594] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:04.922 13:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:04.922 13:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:21:04.922 13:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:04.922 13:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:04.922 13:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:21:04.922 13:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:21:04.922 13:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:04.922 13:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:04.922 13:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:04.922 13:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:04.922 13:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:04.922 13:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:04.922 13:20:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:04.922 13:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:04.922 13:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:04.922 13:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.922 13:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:05.181 13:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:05.181 "name": "Existed_Raid", 00:21:05.181 "uuid": "d24efb2b-2655-4297-8dd5-19168dc9946c", 00:21:05.181 "strip_size_kb": 0, 00:21:05.181 "state": "online", 00:21:05.181 "raid_level": "raid1", 00:21:05.181 "superblock": false, 00:21:05.181 "num_base_bdevs": 4, 00:21:05.181 "num_base_bdevs_discovered": 3, 00:21:05.181 "num_base_bdevs_operational": 3, 00:21:05.181 "base_bdevs_list": [ 00:21:05.181 { 00:21:05.181 "name": null, 00:21:05.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.181 "is_configured": false, 00:21:05.181 "data_offset": 0, 00:21:05.181 "data_size": 65536 00:21:05.181 }, 00:21:05.181 { 00:21:05.181 "name": "BaseBdev2", 00:21:05.181 "uuid": "2520df01-5953-4a29-9b25-53095d233b6b", 00:21:05.181 "is_configured": true, 00:21:05.181 "data_offset": 0, 00:21:05.181 "data_size": 65536 00:21:05.181 }, 00:21:05.181 { 00:21:05.181 "name": "BaseBdev3", 00:21:05.181 "uuid": "8535f156-3ad2-43fa-b719-7e4a10639445", 00:21:05.181 "is_configured": true, 00:21:05.181 "data_offset": 0, 00:21:05.181 "data_size": 65536 00:21:05.181 }, 00:21:05.181 { 00:21:05.181 "name": "BaseBdev4", 00:21:05.181 "uuid": "a4a31f02-abc1-47f3-810a-f1445bf739da", 00:21:05.181 "is_configured": true, 00:21:05.181 "data_offset": 0, 00:21:05.181 
"data_size": 65536 00:21:05.181 } 00:21:05.181 ] 00:21:05.181 }' 00:21:05.181 13:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:05.181 13:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.440 13:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:05.440 13:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:05.440 13:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.440 13:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:05.699 13:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:05.699 13:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:05.699 13:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:05.958 [2024-07-26 13:20:46.387812] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:05.958 13:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:05.958 13:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:05.958 13:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.958 13:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:06.217 13:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:06.217 13:20:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:06.217 13:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:06.475 [2024-07-26 13:20:46.847063] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:06.475 13:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:06.475 13:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:06.475 13:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.475 13:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:06.734 13:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:06.734 13:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:06.734 13:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:06.993 [2024-07-26 13:20:47.314331] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:06.993 [2024-07-26 13:20:47.314408] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:06.993 [2024-07-26 13:20:47.324739] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:06.993 [2024-07-26 13:20:47.324771] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:06.993 [2024-07-26 13:20:47.324782] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x1187840 name Existed_Raid, state offline 00:21:06.993 13:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:06.993 13:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:06.993 13:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.993 13:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:07.252 13:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:07.252 13:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:07.252 13:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:21:07.252 13:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:07.252 13:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:07.252 13:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:07.511 BaseBdev2 00:21:07.511 13:20:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:07.511 13:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:21:07.511 13:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:07.511 13:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:07.511 13:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:07.511 13:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:21:07.511 13:20:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:07.511 13:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:07.771 [ 00:21:07.771 { 00:21:07.771 "name": "BaseBdev2", 00:21:07.771 "aliases": [ 00:21:07.771 "d74d1a2d-1744-4d7f-9112-42b04898994e" 00:21:07.771 ], 00:21:07.771 "product_name": "Malloc disk", 00:21:07.771 "block_size": 512, 00:21:07.771 "num_blocks": 65536, 00:21:07.771 "uuid": "d74d1a2d-1744-4d7f-9112-42b04898994e", 00:21:07.771 "assigned_rate_limits": { 00:21:07.771 "rw_ios_per_sec": 0, 00:21:07.771 "rw_mbytes_per_sec": 0, 00:21:07.771 "r_mbytes_per_sec": 0, 00:21:07.771 "w_mbytes_per_sec": 0 00:21:07.771 }, 00:21:07.771 "claimed": false, 00:21:07.771 "zoned": false, 00:21:07.771 "supported_io_types": { 00:21:07.771 "read": true, 00:21:07.771 "write": true, 00:21:07.771 "unmap": true, 00:21:07.771 "flush": true, 00:21:07.771 "reset": true, 00:21:07.771 "nvme_admin": false, 00:21:07.771 "nvme_io": false, 00:21:07.771 "nvme_io_md": false, 00:21:07.771 "write_zeroes": true, 00:21:07.771 "zcopy": true, 00:21:07.771 "get_zone_info": false, 00:21:07.771 "zone_management": false, 00:21:07.771 "zone_append": false, 00:21:07.771 "compare": false, 00:21:07.771 "compare_and_write": false, 00:21:07.771 "abort": true, 00:21:07.771 "seek_hole": false, 00:21:07.771 "seek_data": false, 00:21:07.771 "copy": true, 00:21:07.771 "nvme_iov_md": false 00:21:07.771 }, 00:21:07.771 "memory_domains": [ 00:21:07.771 { 00:21:07.771 "dma_device_id": "system", 00:21:07.771 "dma_device_type": 1 00:21:07.771 }, 00:21:07.771 { 00:21:07.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.771 "dma_device_type": 2 00:21:07.771 } 00:21:07.771 ], 00:21:07.771 
"driver_specific": {} 00:21:07.771 } 00:21:07.771 ] 00:21:07.771 13:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:07.771 13:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:07.771 13:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:07.771 13:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:08.030 BaseBdev3 00:21:08.030 13:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:08.030 13:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:21:08.030 13:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:08.030 13:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:08.030 13:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:08.030 13:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:08.030 13:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:08.289 13:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:08.548 [ 00:21:08.549 { 00:21:08.549 "name": "BaseBdev3", 00:21:08.549 "aliases": [ 00:21:08.549 "a3dccaf7-9355-4e3e-a830-f080aaa41e9b" 00:21:08.549 ], 00:21:08.549 "product_name": "Malloc disk", 00:21:08.549 "block_size": 512, 00:21:08.549 "num_blocks": 65536, 00:21:08.549 "uuid": 
"a3dccaf7-9355-4e3e-a830-f080aaa41e9b", 00:21:08.549 "assigned_rate_limits": { 00:21:08.549 "rw_ios_per_sec": 0, 00:21:08.549 "rw_mbytes_per_sec": 0, 00:21:08.549 "r_mbytes_per_sec": 0, 00:21:08.549 "w_mbytes_per_sec": 0 00:21:08.549 }, 00:21:08.549 "claimed": false, 00:21:08.549 "zoned": false, 00:21:08.549 "supported_io_types": { 00:21:08.549 "read": true, 00:21:08.549 "write": true, 00:21:08.549 "unmap": true, 00:21:08.549 "flush": true, 00:21:08.549 "reset": true, 00:21:08.549 "nvme_admin": false, 00:21:08.549 "nvme_io": false, 00:21:08.549 "nvme_io_md": false, 00:21:08.549 "write_zeroes": true, 00:21:08.549 "zcopy": true, 00:21:08.549 "get_zone_info": false, 00:21:08.549 "zone_management": false, 00:21:08.549 "zone_append": false, 00:21:08.549 "compare": false, 00:21:08.549 "compare_and_write": false, 00:21:08.549 "abort": true, 00:21:08.549 "seek_hole": false, 00:21:08.549 "seek_data": false, 00:21:08.549 "copy": true, 00:21:08.549 "nvme_iov_md": false 00:21:08.549 }, 00:21:08.549 "memory_domains": [ 00:21:08.549 { 00:21:08.549 "dma_device_id": "system", 00:21:08.549 "dma_device_type": 1 00:21:08.549 }, 00:21:08.549 { 00:21:08.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:08.549 "dma_device_type": 2 00:21:08.549 } 00:21:08.549 ], 00:21:08.549 "driver_specific": {} 00:21:08.549 } 00:21:08.549 ] 00:21:08.549 13:20:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:08.549 13:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:08.549 13:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:08.549 13:20:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:08.808 BaseBdev4 00:21:08.808 13:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 
00:21:08.808 13:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:21:08.808 13:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:08.808 13:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:08.808 13:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:08.808 13:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:08.808 13:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:09.067 13:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:09.067 [ 00:21:09.067 { 00:21:09.067 "name": "BaseBdev4", 00:21:09.067 "aliases": [ 00:21:09.067 "282da8ea-47e8-45d5-bf1d-993903797798" 00:21:09.067 ], 00:21:09.067 "product_name": "Malloc disk", 00:21:09.067 "block_size": 512, 00:21:09.067 "num_blocks": 65536, 00:21:09.067 "uuid": "282da8ea-47e8-45d5-bf1d-993903797798", 00:21:09.067 "assigned_rate_limits": { 00:21:09.067 "rw_ios_per_sec": 0, 00:21:09.067 "rw_mbytes_per_sec": 0, 00:21:09.067 "r_mbytes_per_sec": 0, 00:21:09.067 "w_mbytes_per_sec": 0 00:21:09.067 }, 00:21:09.067 "claimed": false, 00:21:09.067 "zoned": false, 00:21:09.067 "supported_io_types": { 00:21:09.067 "read": true, 00:21:09.067 "write": true, 00:21:09.067 "unmap": true, 00:21:09.067 "flush": true, 00:21:09.067 "reset": true, 00:21:09.067 "nvme_admin": false, 00:21:09.067 "nvme_io": false, 00:21:09.067 "nvme_io_md": false, 00:21:09.067 "write_zeroes": true, 00:21:09.067 "zcopy": true, 00:21:09.067 "get_zone_info": false, 00:21:09.067 "zone_management": false, 00:21:09.067 
"zone_append": false, 00:21:09.067 "compare": false, 00:21:09.067 "compare_and_write": false, 00:21:09.067 "abort": true, 00:21:09.067 "seek_hole": false, 00:21:09.067 "seek_data": false, 00:21:09.067 "copy": true, 00:21:09.067 "nvme_iov_md": false 00:21:09.067 }, 00:21:09.067 "memory_domains": [ 00:21:09.067 { 00:21:09.067 "dma_device_id": "system", 00:21:09.067 "dma_device_type": 1 00:21:09.067 }, 00:21:09.067 { 00:21:09.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:09.067 "dma_device_type": 2 00:21:09.067 } 00:21:09.067 ], 00:21:09.067 "driver_specific": {} 00:21:09.067 } 00:21:09.067 ] 00:21:09.067 13:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:09.067 13:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:09.067 13:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:09.067 13:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:09.327 [2024-07-26 13:20:49.807659] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:09.327 [2024-07-26 13:20:49.807700] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:09.327 [2024-07-26 13:20:49.807717] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:09.327 [2024-07-26 13:20:49.808954] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:09.327 [2024-07-26 13:20:49.808994] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:09.327 13:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:09.327 13:20:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:09.327 13:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:09.327 13:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:09.327 13:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:09.327 13:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:09.327 13:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:09.327 13:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:09.327 13:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:09.327 13:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:09.327 13:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.327 13:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:09.586 13:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:09.586 "name": "Existed_Raid", 00:21:09.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.586 "strip_size_kb": 0, 00:21:09.586 "state": "configuring", 00:21:09.586 "raid_level": "raid1", 00:21:09.586 "superblock": false, 00:21:09.586 "num_base_bdevs": 4, 00:21:09.586 "num_base_bdevs_discovered": 3, 00:21:09.586 "num_base_bdevs_operational": 4, 00:21:09.586 "base_bdevs_list": [ 00:21:09.586 { 00:21:09.586 "name": "BaseBdev1", 00:21:09.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.586 "is_configured": false, 00:21:09.586 "data_offset": 
0, 00:21:09.586 "data_size": 0 00:21:09.586 }, 00:21:09.586 { 00:21:09.586 "name": "BaseBdev2", 00:21:09.586 "uuid": "d74d1a2d-1744-4d7f-9112-42b04898994e", 00:21:09.586 "is_configured": true, 00:21:09.586 "data_offset": 0, 00:21:09.586 "data_size": 65536 00:21:09.586 }, 00:21:09.586 { 00:21:09.586 "name": "BaseBdev3", 00:21:09.586 "uuid": "a3dccaf7-9355-4e3e-a830-f080aaa41e9b", 00:21:09.586 "is_configured": true, 00:21:09.586 "data_offset": 0, 00:21:09.586 "data_size": 65536 00:21:09.586 }, 00:21:09.586 { 00:21:09.586 "name": "BaseBdev4", 00:21:09.586 "uuid": "282da8ea-47e8-45d5-bf1d-993903797798", 00:21:09.586 "is_configured": true, 00:21:09.586 "data_offset": 0, 00:21:09.586 "data_size": 65536 00:21:09.586 } 00:21:09.586 ] 00:21:09.586 }' 00:21:09.586 13:20:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:09.586 13:20:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.155 13:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:10.414 [2024-07-26 13:20:50.854417] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:10.414 13:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:10.414 13:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:10.414 13:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:10.414 13:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:10.414 13:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:10.414 13:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:21:10.414 13:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:10.414 13:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:10.414 13:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:10.414 13:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:10.414 13:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.414 13:20:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:10.711 13:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:10.711 "name": "Existed_Raid", 00:21:10.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.711 "strip_size_kb": 0, 00:21:10.711 "state": "configuring", 00:21:10.711 "raid_level": "raid1", 00:21:10.711 "superblock": false, 00:21:10.711 "num_base_bdevs": 4, 00:21:10.711 "num_base_bdevs_discovered": 2, 00:21:10.711 "num_base_bdevs_operational": 4, 00:21:10.711 "base_bdevs_list": [ 00:21:10.711 { 00:21:10.711 "name": "BaseBdev1", 00:21:10.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.711 "is_configured": false, 00:21:10.711 "data_offset": 0, 00:21:10.711 "data_size": 0 00:21:10.711 }, 00:21:10.711 { 00:21:10.711 "name": null, 00:21:10.711 "uuid": "d74d1a2d-1744-4d7f-9112-42b04898994e", 00:21:10.711 "is_configured": false, 00:21:10.711 "data_offset": 0, 00:21:10.711 "data_size": 65536 00:21:10.711 }, 00:21:10.711 { 00:21:10.711 "name": "BaseBdev3", 00:21:10.711 "uuid": "a3dccaf7-9355-4e3e-a830-f080aaa41e9b", 00:21:10.711 "is_configured": true, 00:21:10.711 "data_offset": 0, 00:21:10.711 "data_size": 65536 00:21:10.711 }, 00:21:10.711 { 00:21:10.711 "name": "BaseBdev4", 00:21:10.711 
"uuid": "282da8ea-47e8-45d5-bf1d-993903797798", 00:21:10.712 "is_configured": true, 00:21:10.712 "data_offset": 0, 00:21:10.712 "data_size": 65536 00:21:10.712 } 00:21:10.712 ] 00:21:10.712 }' 00:21:10.712 13:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:10.712 13:20:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.284 13:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.284 13:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:11.543 13:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:11.543 13:20:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:11.803 [2024-07-26 13:20:52.112977] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:11.803 BaseBdev1 00:21:11.803 13:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:21:11.803 13:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:21:11.803 13:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:11.803 13:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:11.803 13:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:11.803 13:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:11.803 13:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:12.062 13:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:12.062 [ 00:21:12.062 { 00:21:12.062 "name": "BaseBdev1", 00:21:12.062 "aliases": [ 00:21:12.062 "4dd03be3-5569-4797-a9ac-edc46b8db1ff" 00:21:12.062 ], 00:21:12.062 "product_name": "Malloc disk", 00:21:12.062 "block_size": 512, 00:21:12.062 "num_blocks": 65536, 00:21:12.062 "uuid": "4dd03be3-5569-4797-a9ac-edc46b8db1ff", 00:21:12.062 "assigned_rate_limits": { 00:21:12.062 "rw_ios_per_sec": 0, 00:21:12.062 "rw_mbytes_per_sec": 0, 00:21:12.062 "r_mbytes_per_sec": 0, 00:21:12.062 "w_mbytes_per_sec": 0 00:21:12.062 }, 00:21:12.062 "claimed": true, 00:21:12.062 "claim_type": "exclusive_write", 00:21:12.062 "zoned": false, 00:21:12.062 "supported_io_types": { 00:21:12.062 "read": true, 00:21:12.062 "write": true, 00:21:12.062 "unmap": true, 00:21:12.062 "flush": true, 00:21:12.062 "reset": true, 00:21:12.062 "nvme_admin": false, 00:21:12.062 "nvme_io": false, 00:21:12.062 "nvme_io_md": false, 00:21:12.062 "write_zeroes": true, 00:21:12.062 "zcopy": true, 00:21:12.062 "get_zone_info": false, 00:21:12.062 "zone_management": false, 00:21:12.062 "zone_append": false, 00:21:12.062 "compare": false, 00:21:12.062 "compare_and_write": false, 00:21:12.062 "abort": true, 00:21:12.062 "seek_hole": false, 00:21:12.062 "seek_data": false, 00:21:12.062 "copy": true, 00:21:12.062 "nvme_iov_md": false 00:21:12.062 }, 00:21:12.062 "memory_domains": [ 00:21:12.062 { 00:21:12.062 "dma_device_id": "system", 00:21:12.062 "dma_device_type": 1 00:21:12.062 }, 00:21:12.062 { 00:21:12.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.062 "dma_device_type": 2 00:21:12.062 } 00:21:12.062 ], 00:21:12.062 "driver_specific": {} 00:21:12.062 } 00:21:12.062 ] 
00:21:12.062 13:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:12.062 13:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:12.062 13:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:12.062 13:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:12.062 13:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:12.062 13:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:12.062 13:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:12.062 13:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:12.062 13:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:12.062 13:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:12.062 13:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:12.062 13:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.063 13:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:12.322 13:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:12.322 "name": "Existed_Raid", 00:21:12.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:12.322 "strip_size_kb": 0, 00:21:12.322 "state": "configuring", 00:21:12.322 "raid_level": "raid1", 00:21:12.322 "superblock": false, 00:21:12.322 "num_base_bdevs": 4, 00:21:12.322 
"num_base_bdevs_discovered": 3, 00:21:12.322 "num_base_bdevs_operational": 4, 00:21:12.322 "base_bdevs_list": [ 00:21:12.322 { 00:21:12.322 "name": "BaseBdev1", 00:21:12.322 "uuid": "4dd03be3-5569-4797-a9ac-edc46b8db1ff", 00:21:12.322 "is_configured": true, 00:21:12.322 "data_offset": 0, 00:21:12.322 "data_size": 65536 00:21:12.322 }, 00:21:12.322 { 00:21:12.322 "name": null, 00:21:12.322 "uuid": "d74d1a2d-1744-4d7f-9112-42b04898994e", 00:21:12.322 "is_configured": false, 00:21:12.322 "data_offset": 0, 00:21:12.322 "data_size": 65536 00:21:12.322 }, 00:21:12.322 { 00:21:12.322 "name": "BaseBdev3", 00:21:12.322 "uuid": "a3dccaf7-9355-4e3e-a830-f080aaa41e9b", 00:21:12.322 "is_configured": true, 00:21:12.322 "data_offset": 0, 00:21:12.322 "data_size": 65536 00:21:12.322 }, 00:21:12.322 { 00:21:12.322 "name": "BaseBdev4", 00:21:12.322 "uuid": "282da8ea-47e8-45d5-bf1d-993903797798", 00:21:12.322 "is_configured": true, 00:21:12.322 "data_offset": 0, 00:21:12.322 "data_size": 65536 00:21:12.322 } 00:21:12.322 ] 00:21:12.322 }' 00:21:12.322 13:20:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:12.322 13:20:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.904 13:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.904 13:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:13.166 13:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:13.166 13:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:13.425 [2024-07-26 13:20:53.761363] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev3 00:21:13.425 13:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:13.425 13:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:13.425 13:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:13.425 13:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:13.425 13:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:13.425 13:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:13.425 13:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:13.425 13:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:13.425 13:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:13.425 13:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:13.425 13:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.425 13:20:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:13.684 13:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:13.684 "name": "Existed_Raid", 00:21:13.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.684 "strip_size_kb": 0, 00:21:13.684 "state": "configuring", 00:21:13.684 "raid_level": "raid1", 00:21:13.684 "superblock": false, 00:21:13.684 "num_base_bdevs": 4, 00:21:13.684 "num_base_bdevs_discovered": 2, 00:21:13.684 "num_base_bdevs_operational": 4, 00:21:13.684 "base_bdevs_list": 
[ 00:21:13.684 { 00:21:13.684 "name": "BaseBdev1", 00:21:13.684 "uuid": "4dd03be3-5569-4797-a9ac-edc46b8db1ff", 00:21:13.684 "is_configured": true, 00:21:13.684 "data_offset": 0, 00:21:13.684 "data_size": 65536 00:21:13.684 }, 00:21:13.684 { 00:21:13.684 "name": null, 00:21:13.684 "uuid": "d74d1a2d-1744-4d7f-9112-42b04898994e", 00:21:13.684 "is_configured": false, 00:21:13.684 "data_offset": 0, 00:21:13.684 "data_size": 65536 00:21:13.684 }, 00:21:13.684 { 00:21:13.684 "name": null, 00:21:13.684 "uuid": "a3dccaf7-9355-4e3e-a830-f080aaa41e9b", 00:21:13.684 "is_configured": false, 00:21:13.684 "data_offset": 0, 00:21:13.684 "data_size": 65536 00:21:13.684 }, 00:21:13.684 { 00:21:13.684 "name": "BaseBdev4", 00:21:13.684 "uuid": "282da8ea-47e8-45d5-bf1d-993903797798", 00:21:13.684 "is_configured": true, 00:21:13.684 "data_offset": 0, 00:21:13.684 "data_size": 65536 00:21:13.684 } 00:21:13.684 ] 00:21:13.684 }' 00:21:13.684 13:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:13.684 13:20:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.252 13:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.252 13:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:14.252 13:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:14.252 13:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:14.512 [2024-07-26 13:20:54.976566] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:14.512 13:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 
-- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:14.512 13:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:14.512 13:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:14.512 13:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:14.512 13:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:14.512 13:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:14.512 13:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:14.512 13:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:14.512 13:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:14.512 13:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:14.512 13:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.512 13:20:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:14.771 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:14.771 "name": "Existed_Raid", 00:21:14.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.771 "strip_size_kb": 0, 00:21:14.771 "state": "configuring", 00:21:14.771 "raid_level": "raid1", 00:21:14.771 "superblock": false, 00:21:14.771 "num_base_bdevs": 4, 00:21:14.771 "num_base_bdevs_discovered": 3, 00:21:14.771 "num_base_bdevs_operational": 4, 00:21:14.771 "base_bdevs_list": [ 00:21:14.771 { 00:21:14.771 "name": "BaseBdev1", 00:21:14.772 "uuid": 
"4dd03be3-5569-4797-a9ac-edc46b8db1ff", 00:21:14.772 "is_configured": true, 00:21:14.772 "data_offset": 0, 00:21:14.772 "data_size": 65536 00:21:14.772 }, 00:21:14.772 { 00:21:14.772 "name": null, 00:21:14.772 "uuid": "d74d1a2d-1744-4d7f-9112-42b04898994e", 00:21:14.772 "is_configured": false, 00:21:14.772 "data_offset": 0, 00:21:14.772 "data_size": 65536 00:21:14.772 }, 00:21:14.772 { 00:21:14.772 "name": "BaseBdev3", 00:21:14.772 "uuid": "a3dccaf7-9355-4e3e-a830-f080aaa41e9b", 00:21:14.772 "is_configured": true, 00:21:14.772 "data_offset": 0, 00:21:14.772 "data_size": 65536 00:21:14.772 }, 00:21:14.772 { 00:21:14.772 "name": "BaseBdev4", 00:21:14.772 "uuid": "282da8ea-47e8-45d5-bf1d-993903797798", 00:21:14.772 "is_configured": true, 00:21:14.772 "data_offset": 0, 00:21:14.772 "data_size": 65536 00:21:14.772 } 00:21:14.772 ] 00:21:14.772 }' 00:21:14.772 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:14.772 13:20:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.340 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.340 13:20:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:15.599 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:15.599 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:15.858 [2024-07-26 13:20:56.227880] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:15.858 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:15.858 13:20:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:15.858 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:15.858 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:15.858 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:15.858 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:15.858 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:15.858 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:15.858 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:15.858 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:15.858 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.858 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:16.117 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:16.117 "name": "Existed_Raid", 00:21:16.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.117 "strip_size_kb": 0, 00:21:16.117 "state": "configuring", 00:21:16.117 "raid_level": "raid1", 00:21:16.117 "superblock": false, 00:21:16.117 "num_base_bdevs": 4, 00:21:16.117 "num_base_bdevs_discovered": 2, 00:21:16.117 "num_base_bdevs_operational": 4, 00:21:16.117 "base_bdevs_list": [ 00:21:16.117 { 00:21:16.117 "name": null, 00:21:16.117 "uuid": "4dd03be3-5569-4797-a9ac-edc46b8db1ff", 00:21:16.117 "is_configured": false, 00:21:16.117 "data_offset": 0, 
00:21:16.117 "data_size": 65536 00:21:16.117 }, 00:21:16.117 { 00:21:16.117 "name": null, 00:21:16.117 "uuid": "d74d1a2d-1744-4d7f-9112-42b04898994e", 00:21:16.117 "is_configured": false, 00:21:16.117 "data_offset": 0, 00:21:16.117 "data_size": 65536 00:21:16.117 }, 00:21:16.117 { 00:21:16.117 "name": "BaseBdev3", 00:21:16.117 "uuid": "a3dccaf7-9355-4e3e-a830-f080aaa41e9b", 00:21:16.117 "is_configured": true, 00:21:16.117 "data_offset": 0, 00:21:16.117 "data_size": 65536 00:21:16.117 }, 00:21:16.117 { 00:21:16.117 "name": "BaseBdev4", 00:21:16.117 "uuid": "282da8ea-47e8-45d5-bf1d-993903797798", 00:21:16.117 "is_configured": true, 00:21:16.117 "data_offset": 0, 00:21:16.117 "data_size": 65536 00:21:16.117 } 00:21:16.117 ] 00:21:16.117 }' 00:21:16.117 13:20:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:16.117 13:20:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.685 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.685 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:16.944 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:16.944 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:17.203 [2024-07-26 13:20:57.489251] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:17.203 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:17.203 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:21:17.203 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:17.203 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:17.203 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:17.203 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:17.203 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:17.203 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:17.203 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:17.203 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:17.203 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.203 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:17.462 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:17.462 "name": "Existed_Raid", 00:21:17.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.462 "strip_size_kb": 0, 00:21:17.462 "state": "configuring", 00:21:17.462 "raid_level": "raid1", 00:21:17.462 "superblock": false, 00:21:17.462 "num_base_bdevs": 4, 00:21:17.462 "num_base_bdevs_discovered": 3, 00:21:17.462 "num_base_bdevs_operational": 4, 00:21:17.462 "base_bdevs_list": [ 00:21:17.462 { 00:21:17.462 "name": null, 00:21:17.462 "uuid": "4dd03be3-5569-4797-a9ac-edc46b8db1ff", 00:21:17.462 "is_configured": false, 00:21:17.462 "data_offset": 0, 00:21:17.462 "data_size": 65536 00:21:17.462 }, 00:21:17.462 { 
00:21:17.462 "name": "BaseBdev2", 00:21:17.462 "uuid": "d74d1a2d-1744-4d7f-9112-42b04898994e", 00:21:17.462 "is_configured": true, 00:21:17.462 "data_offset": 0, 00:21:17.462 "data_size": 65536 00:21:17.462 }, 00:21:17.462 { 00:21:17.462 "name": "BaseBdev3", 00:21:17.462 "uuid": "a3dccaf7-9355-4e3e-a830-f080aaa41e9b", 00:21:17.462 "is_configured": true, 00:21:17.462 "data_offset": 0, 00:21:17.462 "data_size": 65536 00:21:17.462 }, 00:21:17.462 { 00:21:17.462 "name": "BaseBdev4", 00:21:17.462 "uuid": "282da8ea-47e8-45d5-bf1d-993903797798", 00:21:17.462 "is_configured": true, 00:21:17.462 "data_offset": 0, 00:21:17.462 "data_size": 65536 00:21:17.462 } 00:21:17.462 ] 00:21:17.462 }' 00:21:17.462 13:20:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:17.462 13:20:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.030 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.030 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:18.030 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:21:18.030 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.030 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:18.289 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 4dd03be3-5569-4797-a9ac-edc46b8db1ff 00:21:18.548 [2024-07-26 13:20:58.980321] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:18.548 [2024-07-26 13:20:58.980357] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1185cb0 00:21:18.548 [2024-07-26 13:20:58.980366] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:18.548 [2024-07-26 13:20:58.980548] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1187480 00:21:18.548 [2024-07-26 13:20:58.980665] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1185cb0 00:21:18.548 [2024-07-26 13:20:58.980674] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1185cb0 00:21:18.548 [2024-07-26 13:20:58.980835] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.548 NewBaseBdev 00:21:18.548 13:20:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:21:18.548 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:21:18.548 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:18.548 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:18.548 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:18.548 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:18.548 13:20:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:18.807 13:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:19.066 [ 00:21:19.066 { 
00:21:19.066 "name": "NewBaseBdev", 00:21:19.066 "aliases": [ 00:21:19.066 "4dd03be3-5569-4797-a9ac-edc46b8db1ff" 00:21:19.066 ], 00:21:19.066 "product_name": "Malloc disk", 00:21:19.066 "block_size": 512, 00:21:19.066 "num_blocks": 65536, 00:21:19.066 "uuid": "4dd03be3-5569-4797-a9ac-edc46b8db1ff", 00:21:19.066 "assigned_rate_limits": { 00:21:19.066 "rw_ios_per_sec": 0, 00:21:19.066 "rw_mbytes_per_sec": 0, 00:21:19.066 "r_mbytes_per_sec": 0, 00:21:19.066 "w_mbytes_per_sec": 0 00:21:19.066 }, 00:21:19.066 "claimed": true, 00:21:19.066 "claim_type": "exclusive_write", 00:21:19.066 "zoned": false, 00:21:19.066 "supported_io_types": { 00:21:19.066 "read": true, 00:21:19.066 "write": true, 00:21:19.066 "unmap": true, 00:21:19.066 "flush": true, 00:21:19.066 "reset": true, 00:21:19.066 "nvme_admin": false, 00:21:19.066 "nvme_io": false, 00:21:19.066 "nvme_io_md": false, 00:21:19.066 "write_zeroes": true, 00:21:19.066 "zcopy": true, 00:21:19.066 "get_zone_info": false, 00:21:19.066 "zone_management": false, 00:21:19.066 "zone_append": false, 00:21:19.066 "compare": false, 00:21:19.066 "compare_and_write": false, 00:21:19.066 "abort": true, 00:21:19.066 "seek_hole": false, 00:21:19.066 "seek_data": false, 00:21:19.066 "copy": true, 00:21:19.066 "nvme_iov_md": false 00:21:19.066 }, 00:21:19.066 "memory_domains": [ 00:21:19.066 { 00:21:19.066 "dma_device_id": "system", 00:21:19.066 "dma_device_type": 1 00:21:19.066 }, 00:21:19.066 { 00:21:19.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:19.066 "dma_device_type": 2 00:21:19.066 } 00:21:19.066 ], 00:21:19.066 "driver_specific": {} 00:21:19.066 } 00:21:19.066 ] 00:21:19.066 13:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:19.066 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:21:19.066 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
00:21:19.066 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:19.066 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:19.066 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:19.066 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:19.066 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:19.066 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:19.066 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:19.066 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:19.066 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.067 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:19.067 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:19.067 "name": "Existed_Raid", 00:21:19.067 "uuid": "935ed66c-874c-46c4-ac34-cbf45472de65", 00:21:19.067 "strip_size_kb": 0, 00:21:19.067 "state": "online", 00:21:19.067 "raid_level": "raid1", 00:21:19.067 "superblock": false, 00:21:19.067 "num_base_bdevs": 4, 00:21:19.067 "num_base_bdevs_discovered": 4, 00:21:19.067 "num_base_bdevs_operational": 4, 00:21:19.067 "base_bdevs_list": [ 00:21:19.067 { 00:21:19.067 "name": "NewBaseBdev", 00:21:19.067 "uuid": "4dd03be3-5569-4797-a9ac-edc46b8db1ff", 00:21:19.067 "is_configured": true, 00:21:19.067 "data_offset": 0, 00:21:19.067 "data_size": 65536 00:21:19.067 }, 00:21:19.067 { 00:21:19.067 "name": "BaseBdev2", 
00:21:19.067 "uuid": "d74d1a2d-1744-4d7f-9112-42b04898994e", 00:21:19.067 "is_configured": true, 00:21:19.067 "data_offset": 0, 00:21:19.067 "data_size": 65536 00:21:19.067 }, 00:21:19.067 { 00:21:19.067 "name": "BaseBdev3", 00:21:19.067 "uuid": "a3dccaf7-9355-4e3e-a830-f080aaa41e9b", 00:21:19.067 "is_configured": true, 00:21:19.067 "data_offset": 0, 00:21:19.067 "data_size": 65536 00:21:19.067 }, 00:21:19.067 { 00:21:19.067 "name": "BaseBdev4", 00:21:19.067 "uuid": "282da8ea-47e8-45d5-bf1d-993903797798", 00:21:19.067 "is_configured": true, 00:21:19.067 "data_offset": 0, 00:21:19.067 "data_size": 65536 00:21:19.067 } 00:21:19.067 ] 00:21:19.067 }' 00:21:19.067 13:20:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:19.067 13:20:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.635 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:21:19.635 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:19.635 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:19.635 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:19.635 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:19.635 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:19.635 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:19.635 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:19.894 [2024-07-26 13:21:00.280063] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:19.894 13:21:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:19.894 "name": "Existed_Raid", 00:21:19.894 "aliases": [ 00:21:19.894 "935ed66c-874c-46c4-ac34-cbf45472de65" 00:21:19.894 ], 00:21:19.894 "product_name": "Raid Volume", 00:21:19.894 "block_size": 512, 00:21:19.894 "num_blocks": 65536, 00:21:19.894 "uuid": "935ed66c-874c-46c4-ac34-cbf45472de65", 00:21:19.894 "assigned_rate_limits": { 00:21:19.894 "rw_ios_per_sec": 0, 00:21:19.894 "rw_mbytes_per_sec": 0, 00:21:19.894 "r_mbytes_per_sec": 0, 00:21:19.894 "w_mbytes_per_sec": 0 00:21:19.894 }, 00:21:19.894 "claimed": false, 00:21:19.894 "zoned": false, 00:21:19.894 "supported_io_types": { 00:21:19.894 "read": true, 00:21:19.894 "write": true, 00:21:19.894 "unmap": false, 00:21:19.894 "flush": false, 00:21:19.894 "reset": true, 00:21:19.894 "nvme_admin": false, 00:21:19.894 "nvme_io": false, 00:21:19.894 "nvme_io_md": false, 00:21:19.894 "write_zeroes": true, 00:21:19.894 "zcopy": false, 00:21:19.894 "get_zone_info": false, 00:21:19.894 "zone_management": false, 00:21:19.894 "zone_append": false, 00:21:19.894 "compare": false, 00:21:19.894 "compare_and_write": false, 00:21:19.894 "abort": false, 00:21:19.894 "seek_hole": false, 00:21:19.894 "seek_data": false, 00:21:19.894 "copy": false, 00:21:19.894 "nvme_iov_md": false 00:21:19.894 }, 00:21:19.894 "memory_domains": [ 00:21:19.894 { 00:21:19.894 "dma_device_id": "system", 00:21:19.894 "dma_device_type": 1 00:21:19.894 }, 00:21:19.894 { 00:21:19.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:19.894 "dma_device_type": 2 00:21:19.894 }, 00:21:19.894 { 00:21:19.894 "dma_device_id": "system", 00:21:19.895 "dma_device_type": 1 00:21:19.895 }, 00:21:19.895 { 00:21:19.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:19.895 "dma_device_type": 2 00:21:19.895 }, 00:21:19.895 { 00:21:19.895 "dma_device_id": "system", 00:21:19.895 "dma_device_type": 1 00:21:19.895 }, 00:21:19.895 { 00:21:19.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:21:19.895 "dma_device_type": 2 00:21:19.895 }, 00:21:19.895 { 00:21:19.895 "dma_device_id": "system", 00:21:19.895 "dma_device_type": 1 00:21:19.895 }, 00:21:19.895 { 00:21:19.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:19.895 "dma_device_type": 2 00:21:19.895 } 00:21:19.895 ], 00:21:19.895 "driver_specific": { 00:21:19.895 "raid": { 00:21:19.895 "uuid": "935ed66c-874c-46c4-ac34-cbf45472de65", 00:21:19.895 "strip_size_kb": 0, 00:21:19.895 "state": "online", 00:21:19.895 "raid_level": "raid1", 00:21:19.895 "superblock": false, 00:21:19.895 "num_base_bdevs": 4, 00:21:19.895 "num_base_bdevs_discovered": 4, 00:21:19.895 "num_base_bdevs_operational": 4, 00:21:19.895 "base_bdevs_list": [ 00:21:19.895 { 00:21:19.895 "name": "NewBaseBdev", 00:21:19.895 "uuid": "4dd03be3-5569-4797-a9ac-edc46b8db1ff", 00:21:19.895 "is_configured": true, 00:21:19.895 "data_offset": 0, 00:21:19.895 "data_size": 65536 00:21:19.895 }, 00:21:19.895 { 00:21:19.895 "name": "BaseBdev2", 00:21:19.895 "uuid": "d74d1a2d-1744-4d7f-9112-42b04898994e", 00:21:19.895 "is_configured": true, 00:21:19.895 "data_offset": 0, 00:21:19.895 "data_size": 65536 00:21:19.895 }, 00:21:19.895 { 00:21:19.895 "name": "BaseBdev3", 00:21:19.895 "uuid": "a3dccaf7-9355-4e3e-a830-f080aaa41e9b", 00:21:19.895 "is_configured": true, 00:21:19.895 "data_offset": 0, 00:21:19.895 "data_size": 65536 00:21:19.895 }, 00:21:19.895 { 00:21:19.895 "name": "BaseBdev4", 00:21:19.895 "uuid": "282da8ea-47e8-45d5-bf1d-993903797798", 00:21:19.895 "is_configured": true, 00:21:19.895 "data_offset": 0, 00:21:19.895 "data_size": 65536 00:21:19.895 } 00:21:19.895 ] 00:21:19.895 } 00:21:19.895 } 00:21:19.895 }' 00:21:19.895 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:19.895 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:21:19.895 BaseBdev2 00:21:19.895 BaseBdev3 
00:21:19.895 BaseBdev4' 00:21:19.895 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:19.895 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:19.895 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:20.154 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:20.154 "name": "NewBaseBdev", 00:21:20.154 "aliases": [ 00:21:20.154 "4dd03be3-5569-4797-a9ac-edc46b8db1ff" 00:21:20.154 ], 00:21:20.154 "product_name": "Malloc disk", 00:21:20.154 "block_size": 512, 00:21:20.154 "num_blocks": 65536, 00:21:20.154 "uuid": "4dd03be3-5569-4797-a9ac-edc46b8db1ff", 00:21:20.154 "assigned_rate_limits": { 00:21:20.154 "rw_ios_per_sec": 0, 00:21:20.154 "rw_mbytes_per_sec": 0, 00:21:20.154 "r_mbytes_per_sec": 0, 00:21:20.154 "w_mbytes_per_sec": 0 00:21:20.154 }, 00:21:20.154 "claimed": true, 00:21:20.154 "claim_type": "exclusive_write", 00:21:20.154 "zoned": false, 00:21:20.154 "supported_io_types": { 00:21:20.154 "read": true, 00:21:20.154 "write": true, 00:21:20.154 "unmap": true, 00:21:20.154 "flush": true, 00:21:20.154 "reset": true, 00:21:20.154 "nvme_admin": false, 00:21:20.154 "nvme_io": false, 00:21:20.154 "nvme_io_md": false, 00:21:20.154 "write_zeroes": true, 00:21:20.154 "zcopy": true, 00:21:20.154 "get_zone_info": false, 00:21:20.154 "zone_management": false, 00:21:20.154 "zone_append": false, 00:21:20.154 "compare": false, 00:21:20.154 "compare_and_write": false, 00:21:20.154 "abort": true, 00:21:20.154 "seek_hole": false, 00:21:20.154 "seek_data": false, 00:21:20.154 "copy": true, 00:21:20.154 "nvme_iov_md": false 00:21:20.154 }, 00:21:20.154 "memory_domains": [ 00:21:20.154 { 00:21:20.154 "dma_device_id": "system", 00:21:20.154 "dma_device_type": 1 00:21:20.154 }, 00:21:20.154 { 
00:21:20.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:20.154 "dma_device_type": 2 00:21:20.154 } 00:21:20.154 ], 00:21:20.154 "driver_specific": {} 00:21:20.154 }' 00:21:20.154 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:20.154 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:20.154 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:20.154 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:20.413 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:20.413 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:20.413 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:20.413 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:20.413 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:20.413 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:20.413 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:20.413 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:20.413 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:20.413 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:20.413 13:21:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:20.672 13:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:20.672 "name": "BaseBdev2", 00:21:20.672 "aliases": [ 00:21:20.672 
"d74d1a2d-1744-4d7f-9112-42b04898994e" 00:21:20.672 ], 00:21:20.672 "product_name": "Malloc disk", 00:21:20.672 "block_size": 512, 00:21:20.672 "num_blocks": 65536, 00:21:20.672 "uuid": "d74d1a2d-1744-4d7f-9112-42b04898994e", 00:21:20.673 "assigned_rate_limits": { 00:21:20.673 "rw_ios_per_sec": 0, 00:21:20.673 "rw_mbytes_per_sec": 0, 00:21:20.673 "r_mbytes_per_sec": 0, 00:21:20.673 "w_mbytes_per_sec": 0 00:21:20.673 }, 00:21:20.673 "claimed": true, 00:21:20.673 "claim_type": "exclusive_write", 00:21:20.673 "zoned": false, 00:21:20.673 "supported_io_types": { 00:21:20.673 "read": true, 00:21:20.673 "write": true, 00:21:20.673 "unmap": true, 00:21:20.673 "flush": true, 00:21:20.673 "reset": true, 00:21:20.673 "nvme_admin": false, 00:21:20.673 "nvme_io": false, 00:21:20.673 "nvme_io_md": false, 00:21:20.673 "write_zeroes": true, 00:21:20.673 "zcopy": true, 00:21:20.673 "get_zone_info": false, 00:21:20.673 "zone_management": false, 00:21:20.673 "zone_append": false, 00:21:20.673 "compare": false, 00:21:20.673 "compare_and_write": false, 00:21:20.673 "abort": true, 00:21:20.673 "seek_hole": false, 00:21:20.673 "seek_data": false, 00:21:20.673 "copy": true, 00:21:20.673 "nvme_iov_md": false 00:21:20.673 }, 00:21:20.673 "memory_domains": [ 00:21:20.673 { 00:21:20.673 "dma_device_id": "system", 00:21:20.673 "dma_device_type": 1 00:21:20.673 }, 00:21:20.673 { 00:21:20.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:20.673 "dma_device_type": 2 00:21:20.673 } 00:21:20.673 ], 00:21:20.673 "driver_specific": {} 00:21:20.673 }' 00:21:20.673 13:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:20.673 13:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:20.673 13:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:20.673 13:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:20.932 13:21:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:20.932 13:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:20.932 13:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:20.932 13:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:20.932 13:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:20.932 13:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:20.932 13:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:20.932 13:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:20.932 13:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:20.932 13:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:20.932 13:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:21.192 13:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:21.192 "name": "BaseBdev3", 00:21:21.192 "aliases": [ 00:21:21.192 "a3dccaf7-9355-4e3e-a830-f080aaa41e9b" 00:21:21.192 ], 00:21:21.192 "product_name": "Malloc disk", 00:21:21.192 "block_size": 512, 00:21:21.192 "num_blocks": 65536, 00:21:21.192 "uuid": "a3dccaf7-9355-4e3e-a830-f080aaa41e9b", 00:21:21.192 "assigned_rate_limits": { 00:21:21.192 "rw_ios_per_sec": 0, 00:21:21.192 "rw_mbytes_per_sec": 0, 00:21:21.192 "r_mbytes_per_sec": 0, 00:21:21.192 "w_mbytes_per_sec": 0 00:21:21.192 }, 00:21:21.192 "claimed": true, 00:21:21.192 "claim_type": "exclusive_write", 00:21:21.192 "zoned": false, 00:21:21.192 "supported_io_types": { 00:21:21.192 "read": true, 
00:21:21.192 "write": true, 00:21:21.192 "unmap": true, 00:21:21.192 "flush": true, 00:21:21.192 "reset": true, 00:21:21.192 "nvme_admin": false, 00:21:21.192 "nvme_io": false, 00:21:21.192 "nvme_io_md": false, 00:21:21.192 "write_zeroes": true, 00:21:21.192 "zcopy": true, 00:21:21.193 "get_zone_info": false, 00:21:21.193 "zone_management": false, 00:21:21.193 "zone_append": false, 00:21:21.193 "compare": false, 00:21:21.193 "compare_and_write": false, 00:21:21.193 "abort": true, 00:21:21.193 "seek_hole": false, 00:21:21.193 "seek_data": false, 00:21:21.193 "copy": true, 00:21:21.193 "nvme_iov_md": false 00:21:21.193 }, 00:21:21.193 "memory_domains": [ 00:21:21.193 { 00:21:21.193 "dma_device_id": "system", 00:21:21.193 "dma_device_type": 1 00:21:21.193 }, 00:21:21.193 { 00:21:21.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:21.193 "dma_device_type": 2 00:21:21.193 } 00:21:21.193 ], 00:21:21.193 "driver_specific": {} 00:21:21.193 }' 00:21:21.193 13:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:21.193 13:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:21.452 13:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:21.452 13:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:21.452 13:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:21.452 13:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:21.452 13:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:21.452 13:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:21.452 13:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:21.452 13:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:21.452 
13:21:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:21.711 13:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:21.711 13:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:21.711 13:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:21.711 13:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:21.970 13:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:21.970 "name": "BaseBdev4", 00:21:21.970 "aliases": [ 00:21:21.970 "282da8ea-47e8-45d5-bf1d-993903797798" 00:21:21.970 ], 00:21:21.970 "product_name": "Malloc disk", 00:21:21.970 "block_size": 512, 00:21:21.970 "num_blocks": 65536, 00:21:21.970 "uuid": "282da8ea-47e8-45d5-bf1d-993903797798", 00:21:21.970 "assigned_rate_limits": { 00:21:21.970 "rw_ios_per_sec": 0, 00:21:21.970 "rw_mbytes_per_sec": 0, 00:21:21.970 "r_mbytes_per_sec": 0, 00:21:21.970 "w_mbytes_per_sec": 0 00:21:21.970 }, 00:21:21.970 "claimed": true, 00:21:21.970 "claim_type": "exclusive_write", 00:21:21.970 "zoned": false, 00:21:21.970 "supported_io_types": { 00:21:21.970 "read": true, 00:21:21.970 "write": true, 00:21:21.970 "unmap": true, 00:21:21.970 "flush": true, 00:21:21.970 "reset": true, 00:21:21.970 "nvme_admin": false, 00:21:21.970 "nvme_io": false, 00:21:21.970 "nvme_io_md": false, 00:21:21.970 "write_zeroes": true, 00:21:21.970 "zcopy": true, 00:21:21.970 "get_zone_info": false, 00:21:21.970 "zone_management": false, 00:21:21.970 "zone_append": false, 00:21:21.970 "compare": false, 00:21:21.970 "compare_and_write": false, 00:21:21.970 "abort": true, 00:21:21.970 "seek_hole": false, 00:21:21.970 "seek_data": false, 00:21:21.970 "copy": true, 00:21:21.970 "nvme_iov_md": false 
00:21:21.970 }, 00:21:21.970 "memory_domains": [ 00:21:21.970 { 00:21:21.970 "dma_device_id": "system", 00:21:21.970 "dma_device_type": 1 00:21:21.970 }, 00:21:21.970 { 00:21:21.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:21.970 "dma_device_type": 2 00:21:21.971 } 00:21:21.971 ], 00:21:21.971 "driver_specific": {} 00:21:21.971 }' 00:21:21.971 13:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:21.971 13:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:21.971 13:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:21.971 13:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:21.971 13:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:21.971 13:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:21.971 13:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:21.971 13:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:22.233 13:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:22.233 13:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:22.233 13:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:22.233 13:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:22.233 13:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:22.492 [2024-07-26 13:21:02.798424] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:22.492 [2024-07-26 13:21:02.798451] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:21:22.492 [2024-07-26 13:21:02.798508] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:22.492 [2024-07-26 13:21:02.798762] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:22.492 [2024-07-26 13:21:02.798774] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1185cb0 name Existed_Raid, state offline 00:21:22.492 13:21:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 762180 00:21:22.492 13:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 762180 ']' 00:21:22.492 13:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 762180 00:21:22.492 13:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:21:22.492 13:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:22.492 13:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 762180 00:21:22.492 13:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:22.492 13:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:22.493 13:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 762180' 00:21:22.493 killing process with pid 762180 00:21:22.493 13:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 762180 00:21:22.493 [2024-07-26 13:21:02.875762] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:22.493 13:21:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 762180 00:21:22.493 [2024-07-26 13:21:02.908469] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:22.753 13:21:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:21:22.753 00:21:22.753 real 0m30.241s 00:21:22.753 user 0m55.493s 00:21:22.753 sys 0m5.429s 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.753 ************************************ 00:21:22.753 END TEST raid_state_function_test 00:21:22.753 ************************************ 00:21:22.753 13:21:03 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:21:22.753 13:21:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:22.753 13:21:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:22.753 13:21:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:22.753 ************************************ 00:21:22.753 START TEST raid_state_function_test_sb 00:21:22.753 ************************************ 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:21:22.753 13:21:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 
00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=768001 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 768001' 00:21:22.753 Process raid pid: 768001 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 768001 /var/tmp/spdk-raid.sock 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 768001 ']' 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:22.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:22.753 13:21:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:22.753 [2024-07-26 13:21:03.262200] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:21:22.753 [2024-07-26 13:21:03.262259] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.013 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.013 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:23.013 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.013 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:23.013 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.013 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:23.013 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.013 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:23.013 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.013 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:23.013 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.013 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:23.013 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.013 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:23.013 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.013 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:23.013 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.013 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:23.013 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.013 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:23.013 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.013 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:23.013 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.013 EAL: Requested device 0000:3d:02.3 cannot be used 00:21:23.013 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.013 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:23.013 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.013 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:23.013 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.013 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:23.013 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.013 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:23.013 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.013 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:23.013 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.013 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:23.013 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.013 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:23.013 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.013 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:23.013 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.013 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:23.013 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.013 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:23.013 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.013 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:23.013 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.014 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:23.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.014 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:23.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.014 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:23.014 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.014 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:23.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.014 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:23.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.014 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:23.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.014 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:23.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.014 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:23.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:23.014 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:23.014 [2024-07-26 13:21:03.394367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.014 [2024-07-26 13:21:03.480076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.273 [2024-07-26 13:21:03.548937] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:23.273 [2024-07-26 13:21:03.548969] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:23.879 13:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:23.879 13:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:21:23.879 13:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:23.879 [2024-07-26 13:21:04.359436] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:23.879 [2024-07-26 13:21:04.359475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:21:23.879 [2024-07-26 13:21:04.359485] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:23.879 [2024-07-26 13:21:04.359496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:23.879 [2024-07-26 13:21:04.359504] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:23.879 [2024-07-26 13:21:04.359518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:23.879 [2024-07-26 13:21:04.359526] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:23.879 [2024-07-26 13:21:04.359536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:23.879 13:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:23.879 13:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:23.879 13:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:23.879 13:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:23.879 13:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:23.879 13:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:23.879 13:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:23.879 13:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:23.879 13:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:23.879 13:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
00:21:23.879 13:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.879 13:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:24.139 13:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:24.139 "name": "Existed_Raid", 00:21:24.139 "uuid": "5a551e88-a986-4810-8f6b-d64c2b217b12", 00:21:24.139 "strip_size_kb": 0, 00:21:24.139 "state": "configuring", 00:21:24.139 "raid_level": "raid1", 00:21:24.139 "superblock": true, 00:21:24.139 "num_base_bdevs": 4, 00:21:24.139 "num_base_bdevs_discovered": 0, 00:21:24.139 "num_base_bdevs_operational": 4, 00:21:24.139 "base_bdevs_list": [ 00:21:24.139 { 00:21:24.139 "name": "BaseBdev1", 00:21:24.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.139 "is_configured": false, 00:21:24.139 "data_offset": 0, 00:21:24.139 "data_size": 0 00:21:24.139 }, 00:21:24.139 { 00:21:24.139 "name": "BaseBdev2", 00:21:24.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.139 "is_configured": false, 00:21:24.139 "data_offset": 0, 00:21:24.139 "data_size": 0 00:21:24.139 }, 00:21:24.139 { 00:21:24.139 "name": "BaseBdev3", 00:21:24.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.139 "is_configured": false, 00:21:24.139 "data_offset": 0, 00:21:24.139 "data_size": 0 00:21:24.139 }, 00:21:24.139 { 00:21:24.139 "name": "BaseBdev4", 00:21:24.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.139 "is_configured": false, 00:21:24.139 "data_offset": 0, 00:21:24.139 "data_size": 0 00:21:24.139 } 00:21:24.139 ] 00:21:24.139 }' 00:21:24.139 13:21:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:24.139 13:21:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.708 13:21:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:24.968 [2024-07-26 13:21:05.394009] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:24.968 [2024-07-26 13:21:05.394040] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1137f60 name Existed_Raid, state configuring 00:21:24.968 13:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:25.228 [2024-07-26 13:21:05.618622] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:25.228 [2024-07-26 13:21:05.618649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:25.228 [2024-07-26 13:21:05.618658] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:25.228 [2024-07-26 13:21:05.618673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:25.228 [2024-07-26 13:21:05.618681] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:25.228 [2024-07-26 13:21:05.618691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:25.228 [2024-07-26 13:21:05.618699] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:25.228 [2024-07-26 13:21:05.618709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:25.228 13:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 
00:21:25.488 [2024-07-26 13:21:05.852736] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:25.488 BaseBdev1 00:21:25.488 13:21:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:25.488 13:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:21:25.488 13:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:25.488 13:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:25.488 13:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:25.488 13:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:25.488 13:21:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:25.747 13:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:26.007 [ 00:21:26.007 { 00:21:26.007 "name": "BaseBdev1", 00:21:26.007 "aliases": [ 00:21:26.007 "8256025f-7731-4bb9-bb5c-b460182e5693" 00:21:26.007 ], 00:21:26.007 "product_name": "Malloc disk", 00:21:26.007 "block_size": 512, 00:21:26.007 "num_blocks": 65536, 00:21:26.007 "uuid": "8256025f-7731-4bb9-bb5c-b460182e5693", 00:21:26.007 "assigned_rate_limits": { 00:21:26.007 "rw_ios_per_sec": 0, 00:21:26.007 "rw_mbytes_per_sec": 0, 00:21:26.007 "r_mbytes_per_sec": 0, 00:21:26.007 "w_mbytes_per_sec": 0 00:21:26.007 }, 00:21:26.007 "claimed": true, 00:21:26.007 "claim_type": "exclusive_write", 00:21:26.007 "zoned": false, 00:21:26.007 "supported_io_types": { 00:21:26.007 "read": true, 00:21:26.007 "write": true, 
00:21:26.007 "unmap": true, 00:21:26.007 "flush": true, 00:21:26.007 "reset": true, 00:21:26.007 "nvme_admin": false, 00:21:26.007 "nvme_io": false, 00:21:26.007 "nvme_io_md": false, 00:21:26.007 "write_zeroes": true, 00:21:26.007 "zcopy": true, 00:21:26.007 "get_zone_info": false, 00:21:26.007 "zone_management": false, 00:21:26.007 "zone_append": false, 00:21:26.007 "compare": false, 00:21:26.007 "compare_and_write": false, 00:21:26.007 "abort": true, 00:21:26.007 "seek_hole": false, 00:21:26.007 "seek_data": false, 00:21:26.007 "copy": true, 00:21:26.007 "nvme_iov_md": false 00:21:26.007 }, 00:21:26.007 "memory_domains": [ 00:21:26.007 { 00:21:26.007 "dma_device_id": "system", 00:21:26.007 "dma_device_type": 1 00:21:26.007 }, 00:21:26.007 { 00:21:26.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.007 "dma_device_type": 2 00:21:26.007 } 00:21:26.007 ], 00:21:26.007 "driver_specific": {} 00:21:26.008 } 00:21:26.008 ] 00:21:26.008 13:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:26.008 13:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:26.008 13:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:26.008 13:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:26.008 13:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:26.008 13:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:26.008 13:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:26.008 13:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:26.008 13:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:21:26.008 13:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:26.008 13:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:26.008 13:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.008 13:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:26.267 13:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:26.267 "name": "Existed_Raid", 00:21:26.267 "uuid": "be2672d2-31b6-4848-baa6-7b812efd4972", 00:21:26.267 "strip_size_kb": 0, 00:21:26.267 "state": "configuring", 00:21:26.267 "raid_level": "raid1", 00:21:26.267 "superblock": true, 00:21:26.267 "num_base_bdevs": 4, 00:21:26.267 "num_base_bdevs_discovered": 1, 00:21:26.267 "num_base_bdevs_operational": 4, 00:21:26.267 "base_bdevs_list": [ 00:21:26.267 { 00:21:26.267 "name": "BaseBdev1", 00:21:26.267 "uuid": "8256025f-7731-4bb9-bb5c-b460182e5693", 00:21:26.267 "is_configured": true, 00:21:26.267 "data_offset": 2048, 00:21:26.267 "data_size": 63488 00:21:26.267 }, 00:21:26.267 { 00:21:26.267 "name": "BaseBdev2", 00:21:26.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.267 "is_configured": false, 00:21:26.267 "data_offset": 0, 00:21:26.267 "data_size": 0 00:21:26.267 }, 00:21:26.267 { 00:21:26.267 "name": "BaseBdev3", 00:21:26.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.267 "is_configured": false, 00:21:26.267 "data_offset": 0, 00:21:26.267 "data_size": 0 00:21:26.267 }, 00:21:26.267 { 00:21:26.267 "name": "BaseBdev4", 00:21:26.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.267 "is_configured": false, 00:21:26.267 "data_offset": 0, 00:21:26.267 "data_size": 0 00:21:26.267 } 00:21:26.267 ] 
00:21:26.267 }' 00:21:26.267 13:21:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:26.267 13:21:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.836 13:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:26.836 [2024-07-26 13:21:07.320593] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:26.836 [2024-07-26 13:21:07.320630] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11377d0 name Existed_Raid, state configuring 00:21:26.836 13:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:27.095 [2024-07-26 13:21:07.549230] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:27.095 [2024-07-26 13:21:07.550610] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:27.095 [2024-07-26 13:21:07.550642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:27.095 [2024-07-26 13:21:07.550652] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:27.095 [2024-07-26 13:21:07.550662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:27.095 [2024-07-26 13:21:07.550670] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:27.095 [2024-07-26 13:21:07.550680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:27.095 13:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 
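The `verify_raid_bdev_state` helper traced above selects one raid bdev out of the `bdev_raid_get_bdevs all` RPC output with `jq -r '.[] | select(.name == "Existed_Raid")'` and compares its fields against the expected values. A minimal Python sketch of the same check, assuming only the JSON shape visible in this log (the field values are copied from the `Existed_Raid` dump above; the helper itself is an illustration, not the actual `bdev_raid.sh` function):

```python
import json

# Trimmed-down output as dumped in this log by:
#   rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
# (values copied from the "Existed_Raid" dump above)
RAID_BDEVS_JSON = """
[
  {
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "raid1",
    "strip_size_kb": 0,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 4
  }
]
"""

def verify_raid_bdev_state(raw_json, name, expected_state,
                           raid_level, strip_size, operational):
    """Sketch of the shell helper: pick the bdev by name (the jq
    '.[] | select(.name == ...)' step), then compare each field."""
    info = next(b for b in json.loads(raw_json) if b["name"] == name)
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == operational)

# Mirrors the call in the log:
#   verify_raid_bdev_state Existed_Raid configuring raid1 0 4
print(verify_raid_bdev_state(RAID_BDEVS_JSON, "Existed_Raid",
                             "configuring", "raid1", 0, 4))  # True
```

After each base bdev is attached, the test reruns this check expecting `num_base_bdevs_discovered` to climb toward `num_base_bdevs_operational`, and switches the expected state from `configuring` to `online` once all four are claimed.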
00:21:27.095 13:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:27.095 13:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:27.095 13:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:27.095 13:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:27.095 13:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:27.096 13:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:27.096 13:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:27.096 13:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:27.096 13:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:27.096 13:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:27.096 13:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:27.096 13:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.096 13:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:27.355 13:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:27.355 "name": "Existed_Raid", 00:21:27.355 "uuid": "3c06706a-32fd-448b-b25f-ca40def78014", 00:21:27.355 "strip_size_kb": 0, 00:21:27.355 "state": "configuring", 00:21:27.355 "raid_level": "raid1", 00:21:27.355 "superblock": true, 00:21:27.355 
"num_base_bdevs": 4, 00:21:27.355 "num_base_bdevs_discovered": 1, 00:21:27.355 "num_base_bdevs_operational": 4, 00:21:27.355 "base_bdevs_list": [ 00:21:27.355 { 00:21:27.355 "name": "BaseBdev1", 00:21:27.355 "uuid": "8256025f-7731-4bb9-bb5c-b460182e5693", 00:21:27.355 "is_configured": true, 00:21:27.355 "data_offset": 2048, 00:21:27.355 "data_size": 63488 00:21:27.355 }, 00:21:27.355 { 00:21:27.355 "name": "BaseBdev2", 00:21:27.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.355 "is_configured": false, 00:21:27.355 "data_offset": 0, 00:21:27.355 "data_size": 0 00:21:27.355 }, 00:21:27.355 { 00:21:27.355 "name": "BaseBdev3", 00:21:27.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.355 "is_configured": false, 00:21:27.355 "data_offset": 0, 00:21:27.355 "data_size": 0 00:21:27.355 }, 00:21:27.355 { 00:21:27.355 "name": "BaseBdev4", 00:21:27.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.355 "is_configured": false, 00:21:27.355 "data_offset": 0, 00:21:27.355 "data_size": 0 00:21:27.355 } 00:21:27.355 ] 00:21:27.355 }' 00:21:27.355 13:21:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:27.355 13:21:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.924 13:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:28.183 [2024-07-26 13:21:08.579109] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:28.183 BaseBdev2 00:21:28.183 13:21:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:28.183 13:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:21:28.183 13:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:21:28.183 13:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:28.183 13:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:28.183 13:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:28.183 13:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:28.443 13:21:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:28.702 [ 00:21:28.702 { 00:21:28.702 "name": "BaseBdev2", 00:21:28.702 "aliases": [ 00:21:28.702 "389697bf-8bca-4445-bac7-facc6ba598d5" 00:21:28.702 ], 00:21:28.702 "product_name": "Malloc disk", 00:21:28.702 "block_size": 512, 00:21:28.702 "num_blocks": 65536, 00:21:28.702 "uuid": "389697bf-8bca-4445-bac7-facc6ba598d5", 00:21:28.702 "assigned_rate_limits": { 00:21:28.702 "rw_ios_per_sec": 0, 00:21:28.702 "rw_mbytes_per_sec": 0, 00:21:28.702 "r_mbytes_per_sec": 0, 00:21:28.702 "w_mbytes_per_sec": 0 00:21:28.702 }, 00:21:28.702 "claimed": true, 00:21:28.702 "claim_type": "exclusive_write", 00:21:28.702 "zoned": false, 00:21:28.702 "supported_io_types": { 00:21:28.702 "read": true, 00:21:28.702 "write": true, 00:21:28.702 "unmap": true, 00:21:28.702 "flush": true, 00:21:28.702 "reset": true, 00:21:28.702 "nvme_admin": false, 00:21:28.702 "nvme_io": false, 00:21:28.702 "nvme_io_md": false, 00:21:28.702 "write_zeroes": true, 00:21:28.702 "zcopy": true, 00:21:28.702 "get_zone_info": false, 00:21:28.702 "zone_management": false, 00:21:28.702 "zone_append": false, 00:21:28.702 "compare": false, 00:21:28.702 "compare_and_write": false, 00:21:28.702 "abort": true, 00:21:28.702 "seek_hole": false, 00:21:28.702 
"seek_data": false, 00:21:28.702 "copy": true, 00:21:28.702 "nvme_iov_md": false 00:21:28.702 }, 00:21:28.702 "memory_domains": [ 00:21:28.702 { 00:21:28.702 "dma_device_id": "system", 00:21:28.702 "dma_device_type": 1 00:21:28.703 }, 00:21:28.703 { 00:21:28.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:28.703 "dma_device_type": 2 00:21:28.703 } 00:21:28.703 ], 00:21:28.703 "driver_specific": {} 00:21:28.703 } 00:21:28.703 ] 00:21:28.703 13:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:28.703 13:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:28.703 13:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:28.703 13:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:28.703 13:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:28.703 13:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:28.703 13:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:28.703 13:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:28.703 13:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:28.703 13:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:28.703 13:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:28.703 13:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:28.703 13:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:28.703 13:21:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.703 13:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:28.962 13:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:28.962 "name": "Existed_Raid", 00:21:28.962 "uuid": "3c06706a-32fd-448b-b25f-ca40def78014", 00:21:28.962 "strip_size_kb": 0, 00:21:28.962 "state": "configuring", 00:21:28.962 "raid_level": "raid1", 00:21:28.962 "superblock": true, 00:21:28.962 "num_base_bdevs": 4, 00:21:28.962 "num_base_bdevs_discovered": 2, 00:21:28.962 "num_base_bdevs_operational": 4, 00:21:28.962 "base_bdevs_list": [ 00:21:28.962 { 00:21:28.962 "name": "BaseBdev1", 00:21:28.962 "uuid": "8256025f-7731-4bb9-bb5c-b460182e5693", 00:21:28.962 "is_configured": true, 00:21:28.962 "data_offset": 2048, 00:21:28.962 "data_size": 63488 00:21:28.962 }, 00:21:28.962 { 00:21:28.962 "name": "BaseBdev2", 00:21:28.962 "uuid": "389697bf-8bca-4445-bac7-facc6ba598d5", 00:21:28.962 "is_configured": true, 00:21:28.962 "data_offset": 2048, 00:21:28.962 "data_size": 63488 00:21:28.962 }, 00:21:28.962 { 00:21:28.962 "name": "BaseBdev3", 00:21:28.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.962 "is_configured": false, 00:21:28.962 "data_offset": 0, 00:21:28.962 "data_size": 0 00:21:28.962 }, 00:21:28.962 { 00:21:28.962 "name": "BaseBdev4", 00:21:28.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.962 "is_configured": false, 00:21:28.962 "data_offset": 0, 00:21:28.962 "data_size": 0 00:21:28.962 } 00:21:28.962 ] 00:21:28.962 }' 00:21:28.962 13:21:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:28.962 13:21:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.531 13:21:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:29.791 [2024-07-26 13:21:10.066182] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:29.791 BaseBdev3 00:21:29.791 13:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:29.791 13:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:21:29.791 13:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:29.791 13:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:29.791 13:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:29.791 13:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:29.791 13:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:29.791 13:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:30.050 [ 00:21:30.050 { 00:21:30.050 "name": "BaseBdev3", 00:21:30.050 "aliases": [ 00:21:30.050 "fdd5fcc6-5a5b-40a0-b95f-fcb0b2979abb" 00:21:30.050 ], 00:21:30.050 "product_name": "Malloc disk", 00:21:30.050 "block_size": 512, 00:21:30.050 "num_blocks": 65536, 00:21:30.050 "uuid": "fdd5fcc6-5a5b-40a0-b95f-fcb0b2979abb", 00:21:30.050 "assigned_rate_limits": { 00:21:30.050 "rw_ios_per_sec": 0, 00:21:30.050 "rw_mbytes_per_sec": 0, 00:21:30.050 "r_mbytes_per_sec": 0, 00:21:30.050 "w_mbytes_per_sec": 0 00:21:30.050 }, 
00:21:30.050 "claimed": true, 00:21:30.050 "claim_type": "exclusive_write", 00:21:30.050 "zoned": false, 00:21:30.050 "supported_io_types": { 00:21:30.050 "read": true, 00:21:30.050 "write": true, 00:21:30.050 "unmap": true, 00:21:30.050 "flush": true, 00:21:30.050 "reset": true, 00:21:30.050 "nvme_admin": false, 00:21:30.050 "nvme_io": false, 00:21:30.050 "nvme_io_md": false, 00:21:30.050 "write_zeroes": true, 00:21:30.050 "zcopy": true, 00:21:30.051 "get_zone_info": false, 00:21:30.051 "zone_management": false, 00:21:30.051 "zone_append": false, 00:21:30.051 "compare": false, 00:21:30.051 "compare_and_write": false, 00:21:30.051 "abort": true, 00:21:30.051 "seek_hole": false, 00:21:30.051 "seek_data": false, 00:21:30.051 "copy": true, 00:21:30.051 "nvme_iov_md": false 00:21:30.051 }, 00:21:30.051 "memory_domains": [ 00:21:30.051 { 00:21:30.051 "dma_device_id": "system", 00:21:30.051 "dma_device_type": 1 00:21:30.051 }, 00:21:30.051 { 00:21:30.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:30.051 "dma_device_type": 2 00:21:30.051 } 00:21:30.051 ], 00:21:30.051 "driver_specific": {} 00:21:30.051 } 00:21:30.051 ] 00:21:30.051 13:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:30.051 13:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:30.051 13:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:30.051 13:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:30.051 13:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:30.051 13:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:30.051 13:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:30.051 13:21:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:30.051 13:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:30.051 13:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:30.051 13:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:30.051 13:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:30.051 13:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:30.051 13:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.051 13:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:30.310 13:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:30.310 "name": "Existed_Raid", 00:21:30.310 "uuid": "3c06706a-32fd-448b-b25f-ca40def78014", 00:21:30.310 "strip_size_kb": 0, 00:21:30.310 "state": "configuring", 00:21:30.310 "raid_level": "raid1", 00:21:30.310 "superblock": true, 00:21:30.310 "num_base_bdevs": 4, 00:21:30.310 "num_base_bdevs_discovered": 3, 00:21:30.310 "num_base_bdevs_operational": 4, 00:21:30.310 "base_bdevs_list": [ 00:21:30.310 { 00:21:30.310 "name": "BaseBdev1", 00:21:30.310 "uuid": "8256025f-7731-4bb9-bb5c-b460182e5693", 00:21:30.310 "is_configured": true, 00:21:30.310 "data_offset": 2048, 00:21:30.310 "data_size": 63488 00:21:30.310 }, 00:21:30.310 { 00:21:30.310 "name": "BaseBdev2", 00:21:30.310 "uuid": "389697bf-8bca-4445-bac7-facc6ba598d5", 00:21:30.310 "is_configured": true, 00:21:30.310 "data_offset": 2048, 00:21:30.310 "data_size": 63488 00:21:30.310 }, 00:21:30.310 { 00:21:30.310 "name": 
"BaseBdev3", 00:21:30.310 "uuid": "fdd5fcc6-5a5b-40a0-b95f-fcb0b2979abb", 00:21:30.310 "is_configured": true, 00:21:30.310 "data_offset": 2048, 00:21:30.310 "data_size": 63488 00:21:30.310 }, 00:21:30.310 { 00:21:30.310 "name": "BaseBdev4", 00:21:30.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.310 "is_configured": false, 00:21:30.310 "data_offset": 0, 00:21:30.310 "data_size": 0 00:21:30.310 } 00:21:30.310 ] 00:21:30.310 }' 00:21:30.310 13:21:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:30.310 13:21:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.879 13:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:31.139 [2024-07-26 13:21:11.565248] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:31.139 [2024-07-26 13:21:11.565399] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1138840 00:21:31.139 [2024-07-26 13:21:11.565412] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:31.139 [2024-07-26 13:21:11.565574] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1138480 00:21:31.139 [2024-07-26 13:21:11.565694] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1138840 00:21:31.139 [2024-07-26 13:21:11.565704] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1138840 00:21:31.139 [2024-07-26 13:21:11.565788] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:31.139 BaseBdev4 00:21:31.139 13:21:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:21:31.139 13:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev4 00:21:31.139 13:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:31.139 13:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:31.139 13:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:31.139 13:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:31.139 13:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:31.398 13:21:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:31.657 [ 00:21:31.657 { 00:21:31.657 "name": "BaseBdev4", 00:21:31.657 "aliases": [ 00:21:31.657 "3d337d06-ce47-46a4-aa8f-713ab251c4f6" 00:21:31.657 ], 00:21:31.657 "product_name": "Malloc disk", 00:21:31.657 "block_size": 512, 00:21:31.657 "num_blocks": 65536, 00:21:31.657 "uuid": "3d337d06-ce47-46a4-aa8f-713ab251c4f6", 00:21:31.657 "assigned_rate_limits": { 00:21:31.657 "rw_ios_per_sec": 0, 00:21:31.657 "rw_mbytes_per_sec": 0, 00:21:31.657 "r_mbytes_per_sec": 0, 00:21:31.657 "w_mbytes_per_sec": 0 00:21:31.657 }, 00:21:31.657 "claimed": true, 00:21:31.657 "claim_type": "exclusive_write", 00:21:31.657 "zoned": false, 00:21:31.657 "supported_io_types": { 00:21:31.657 "read": true, 00:21:31.657 "write": true, 00:21:31.657 "unmap": true, 00:21:31.657 "flush": true, 00:21:31.657 "reset": true, 00:21:31.657 "nvme_admin": false, 00:21:31.657 "nvme_io": false, 00:21:31.657 "nvme_io_md": false, 00:21:31.657 "write_zeroes": true, 00:21:31.657 "zcopy": true, 00:21:31.657 "get_zone_info": false, 00:21:31.657 "zone_management": false, 00:21:31.657 "zone_append": false, 00:21:31.657 
"compare": false, 00:21:31.657 "compare_and_write": false, 00:21:31.657 "abort": true, 00:21:31.657 "seek_hole": false, 00:21:31.657 "seek_data": false, 00:21:31.657 "copy": true, 00:21:31.657 "nvme_iov_md": false 00:21:31.657 }, 00:21:31.657 "memory_domains": [ 00:21:31.657 { 00:21:31.657 "dma_device_id": "system", 00:21:31.657 "dma_device_type": 1 00:21:31.657 }, 00:21:31.657 { 00:21:31.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.657 "dma_device_type": 2 00:21:31.657 } 00:21:31.657 ], 00:21:31.657 "driver_specific": {} 00:21:31.657 } 00:21:31.657 ] 00:21:31.657 13:21:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:31.657 13:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:31.657 13:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:31.657 13:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:21:31.657 13:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:31.658 13:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:31.658 13:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:31.658 13:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:31.658 13:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:31.658 13:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:31.658 13:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:31.658 13:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:31.658 13:21:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:31.658 13:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.658 13:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:31.917 13:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:31.917 "name": "Existed_Raid", 00:21:31.917 "uuid": "3c06706a-32fd-448b-b25f-ca40def78014", 00:21:31.917 "strip_size_kb": 0, 00:21:31.917 "state": "online", 00:21:31.917 "raid_level": "raid1", 00:21:31.917 "superblock": true, 00:21:31.917 "num_base_bdevs": 4, 00:21:31.917 "num_base_bdevs_discovered": 4, 00:21:31.917 "num_base_bdevs_operational": 4, 00:21:31.917 "base_bdevs_list": [ 00:21:31.917 { 00:21:31.917 "name": "BaseBdev1", 00:21:31.917 "uuid": "8256025f-7731-4bb9-bb5c-b460182e5693", 00:21:31.917 "is_configured": true, 00:21:31.917 "data_offset": 2048, 00:21:31.917 "data_size": 63488 00:21:31.917 }, 00:21:31.917 { 00:21:31.917 "name": "BaseBdev2", 00:21:31.917 "uuid": "389697bf-8bca-4445-bac7-facc6ba598d5", 00:21:31.917 "is_configured": true, 00:21:31.917 "data_offset": 2048, 00:21:31.917 "data_size": 63488 00:21:31.917 }, 00:21:31.917 { 00:21:31.917 "name": "BaseBdev3", 00:21:31.917 "uuid": "fdd5fcc6-5a5b-40a0-b95f-fcb0b2979abb", 00:21:31.917 "is_configured": true, 00:21:31.917 "data_offset": 2048, 00:21:31.917 "data_size": 63488 00:21:31.917 }, 00:21:31.917 { 00:21:31.917 "name": "BaseBdev4", 00:21:31.917 "uuid": "3d337d06-ce47-46a4-aa8f-713ab251c4f6", 00:21:31.917 "is_configured": true, 00:21:31.917 "data_offset": 2048, 00:21:31.917 "data_size": 63488 00:21:31.917 } 00:21:31.917 ] 00:21:31.917 }' 00:21:31.917 13:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:31.917 13:21:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.483 13:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:32.483 13:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:32.483 13:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:32.483 13:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:32.483 13:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:32.483 13:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:21:32.483 13:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:32.483 13:21:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:32.742 [2024-07-26 13:21:13.029401] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:32.742 13:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:32.742 "name": "Existed_Raid", 00:21:32.742 "aliases": [ 00:21:32.742 "3c06706a-32fd-448b-b25f-ca40def78014" 00:21:32.742 ], 00:21:32.742 "product_name": "Raid Volume", 00:21:32.742 "block_size": 512, 00:21:32.742 "num_blocks": 63488, 00:21:32.742 "uuid": "3c06706a-32fd-448b-b25f-ca40def78014", 00:21:32.742 "assigned_rate_limits": { 00:21:32.742 "rw_ios_per_sec": 0, 00:21:32.742 "rw_mbytes_per_sec": 0, 00:21:32.742 "r_mbytes_per_sec": 0, 00:21:32.742 "w_mbytes_per_sec": 0 00:21:32.742 }, 00:21:32.742 "claimed": false, 00:21:32.742 "zoned": false, 00:21:32.742 "supported_io_types": { 00:21:32.742 "read": true, 00:21:32.742 "write": true, 00:21:32.742 "unmap": false, 
00:21:32.742 "flush": false, 00:21:32.742 "reset": true, 00:21:32.742 "nvme_admin": false, 00:21:32.742 "nvme_io": false, 00:21:32.742 "nvme_io_md": false, 00:21:32.742 "write_zeroes": true, 00:21:32.742 "zcopy": false, 00:21:32.742 "get_zone_info": false, 00:21:32.742 "zone_management": false, 00:21:32.742 "zone_append": false, 00:21:32.742 "compare": false, 00:21:32.742 "compare_and_write": false, 00:21:32.742 "abort": false, 00:21:32.742 "seek_hole": false, 00:21:32.742 "seek_data": false, 00:21:32.742 "copy": false, 00:21:32.742 "nvme_iov_md": false 00:21:32.742 }, 00:21:32.742 "memory_domains": [ 00:21:32.742 { 00:21:32.742 "dma_device_id": "system", 00:21:32.742 "dma_device_type": 1 00:21:32.742 }, 00:21:32.742 { 00:21:32.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.742 "dma_device_type": 2 00:21:32.742 }, 00:21:32.742 { 00:21:32.742 "dma_device_id": "system", 00:21:32.742 "dma_device_type": 1 00:21:32.742 }, 00:21:32.742 { 00:21:32.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.742 "dma_device_type": 2 00:21:32.743 }, 00:21:32.743 { 00:21:32.743 "dma_device_id": "system", 00:21:32.743 "dma_device_type": 1 00:21:32.743 }, 00:21:32.743 { 00:21:32.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.743 "dma_device_type": 2 00:21:32.743 }, 00:21:32.743 { 00:21:32.743 "dma_device_id": "system", 00:21:32.743 "dma_device_type": 1 00:21:32.743 }, 00:21:32.743 { 00:21:32.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.743 "dma_device_type": 2 00:21:32.743 } 00:21:32.743 ], 00:21:32.743 "driver_specific": { 00:21:32.743 "raid": { 00:21:32.743 "uuid": "3c06706a-32fd-448b-b25f-ca40def78014", 00:21:32.743 "strip_size_kb": 0, 00:21:32.743 "state": "online", 00:21:32.743 "raid_level": "raid1", 00:21:32.743 "superblock": true, 00:21:32.743 "num_base_bdevs": 4, 00:21:32.743 "num_base_bdevs_discovered": 4, 00:21:32.743 "num_base_bdevs_operational": 4, 00:21:32.743 "base_bdevs_list": [ 00:21:32.743 { 00:21:32.743 "name": "BaseBdev1", 00:21:32.743 
"uuid": "8256025f-7731-4bb9-bb5c-b460182e5693", 00:21:32.743 "is_configured": true, 00:21:32.743 "data_offset": 2048, 00:21:32.743 "data_size": 63488 00:21:32.743 }, 00:21:32.743 { 00:21:32.743 "name": "BaseBdev2", 00:21:32.743 "uuid": "389697bf-8bca-4445-bac7-facc6ba598d5", 00:21:32.743 "is_configured": true, 00:21:32.743 "data_offset": 2048, 00:21:32.743 "data_size": 63488 00:21:32.743 }, 00:21:32.743 { 00:21:32.743 "name": "BaseBdev3", 00:21:32.743 "uuid": "fdd5fcc6-5a5b-40a0-b95f-fcb0b2979abb", 00:21:32.743 "is_configured": true, 00:21:32.743 "data_offset": 2048, 00:21:32.743 "data_size": 63488 00:21:32.743 }, 00:21:32.743 { 00:21:32.743 "name": "BaseBdev4", 00:21:32.743 "uuid": "3d337d06-ce47-46a4-aa8f-713ab251c4f6", 00:21:32.743 "is_configured": true, 00:21:32.743 "data_offset": 2048, 00:21:32.743 "data_size": 63488 00:21:32.743 } 00:21:32.743 ] 00:21:32.743 } 00:21:32.743 } 00:21:32.743 }' 00:21:32.743 13:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:32.743 13:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:32.743 BaseBdev2 00:21:32.743 BaseBdev3 00:21:32.743 BaseBdev4' 00:21:32.743 13:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:32.743 13:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:32.743 13:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:33.002 13:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:33.002 "name": "BaseBdev1", 00:21:33.002 "aliases": [ 00:21:33.002 "8256025f-7731-4bb9-bb5c-b460182e5693" 00:21:33.002 ], 00:21:33.002 "product_name": "Malloc disk", 00:21:33.002 
"block_size": 512, 00:21:33.002 "num_blocks": 65536, 00:21:33.002 "uuid": "8256025f-7731-4bb9-bb5c-b460182e5693", 00:21:33.002 "assigned_rate_limits": { 00:21:33.002 "rw_ios_per_sec": 0, 00:21:33.002 "rw_mbytes_per_sec": 0, 00:21:33.002 "r_mbytes_per_sec": 0, 00:21:33.002 "w_mbytes_per_sec": 0 00:21:33.002 }, 00:21:33.002 "claimed": true, 00:21:33.002 "claim_type": "exclusive_write", 00:21:33.002 "zoned": false, 00:21:33.002 "supported_io_types": { 00:21:33.002 "read": true, 00:21:33.002 "write": true, 00:21:33.002 "unmap": true, 00:21:33.002 "flush": true, 00:21:33.002 "reset": true, 00:21:33.002 "nvme_admin": false, 00:21:33.002 "nvme_io": false, 00:21:33.002 "nvme_io_md": false, 00:21:33.002 "write_zeroes": true, 00:21:33.002 "zcopy": true, 00:21:33.002 "get_zone_info": false, 00:21:33.002 "zone_management": false, 00:21:33.002 "zone_append": false, 00:21:33.002 "compare": false, 00:21:33.002 "compare_and_write": false, 00:21:33.002 "abort": true, 00:21:33.002 "seek_hole": false, 00:21:33.002 "seek_data": false, 00:21:33.002 "copy": true, 00:21:33.002 "nvme_iov_md": false 00:21:33.002 }, 00:21:33.002 "memory_domains": [ 00:21:33.002 { 00:21:33.002 "dma_device_id": "system", 00:21:33.002 "dma_device_type": 1 00:21:33.002 }, 00:21:33.002 { 00:21:33.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.002 "dma_device_type": 2 00:21:33.002 } 00:21:33.002 ], 00:21:33.002 "driver_specific": {} 00:21:33.002 }' 00:21:33.002 13:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:33.002 13:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:33.002 13:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:33.002 13:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:33.002 13:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:33.002 13:21:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:33.002 13:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:33.262 13:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:33.262 13:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:33.262 13:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:33.262 13:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:33.262 13:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:33.262 13:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:33.262 13:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:33.262 13:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:33.522 13:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:33.522 "name": "BaseBdev2", 00:21:33.522 "aliases": [ 00:21:33.522 "389697bf-8bca-4445-bac7-facc6ba598d5" 00:21:33.522 ], 00:21:33.522 "product_name": "Malloc disk", 00:21:33.522 "block_size": 512, 00:21:33.522 "num_blocks": 65536, 00:21:33.522 "uuid": "389697bf-8bca-4445-bac7-facc6ba598d5", 00:21:33.522 "assigned_rate_limits": { 00:21:33.522 "rw_ios_per_sec": 0, 00:21:33.522 "rw_mbytes_per_sec": 0, 00:21:33.522 "r_mbytes_per_sec": 0, 00:21:33.522 "w_mbytes_per_sec": 0 00:21:33.522 }, 00:21:33.522 "claimed": true, 00:21:33.522 "claim_type": "exclusive_write", 00:21:33.522 "zoned": false, 00:21:33.522 "supported_io_types": { 00:21:33.522 "read": true, 00:21:33.522 "write": true, 00:21:33.522 "unmap": true, 00:21:33.522 
"flush": true, 00:21:33.522 "reset": true, 00:21:33.522 "nvme_admin": false, 00:21:33.522 "nvme_io": false, 00:21:33.522 "nvme_io_md": false, 00:21:33.522 "write_zeroes": true, 00:21:33.522 "zcopy": true, 00:21:33.522 "get_zone_info": false, 00:21:33.522 "zone_management": false, 00:21:33.522 "zone_append": false, 00:21:33.522 "compare": false, 00:21:33.522 "compare_and_write": false, 00:21:33.522 "abort": true, 00:21:33.522 "seek_hole": false, 00:21:33.522 "seek_data": false, 00:21:33.522 "copy": true, 00:21:33.522 "nvme_iov_md": false 00:21:33.522 }, 00:21:33.522 "memory_domains": [ 00:21:33.522 { 00:21:33.522 "dma_device_id": "system", 00:21:33.522 "dma_device_type": 1 00:21:33.522 }, 00:21:33.522 { 00:21:33.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.522 "dma_device_type": 2 00:21:33.522 } 00:21:33.522 ], 00:21:33.522 "driver_specific": {} 00:21:33.522 }' 00:21:33.522 13:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:33.522 13:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:33.522 13:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:33.522 13:21:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:33.522 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:33.781 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:33.781 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:33.781 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:33.781 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:33.781 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:33.781 13:21:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:33.781 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:33.781 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:33.781 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:33.781 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:34.040 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:34.040 "name": "BaseBdev3", 00:21:34.040 "aliases": [ 00:21:34.040 "fdd5fcc6-5a5b-40a0-b95f-fcb0b2979abb" 00:21:34.040 ], 00:21:34.040 "product_name": "Malloc disk", 00:21:34.040 "block_size": 512, 00:21:34.040 "num_blocks": 65536, 00:21:34.040 "uuid": "fdd5fcc6-5a5b-40a0-b95f-fcb0b2979abb", 00:21:34.040 "assigned_rate_limits": { 00:21:34.040 "rw_ios_per_sec": 0, 00:21:34.040 "rw_mbytes_per_sec": 0, 00:21:34.040 "r_mbytes_per_sec": 0, 00:21:34.040 "w_mbytes_per_sec": 0 00:21:34.040 }, 00:21:34.040 "claimed": true, 00:21:34.040 "claim_type": "exclusive_write", 00:21:34.040 "zoned": false, 00:21:34.040 "supported_io_types": { 00:21:34.040 "read": true, 00:21:34.040 "write": true, 00:21:34.040 "unmap": true, 00:21:34.040 "flush": true, 00:21:34.040 "reset": true, 00:21:34.040 "nvme_admin": false, 00:21:34.040 "nvme_io": false, 00:21:34.040 "nvme_io_md": false, 00:21:34.040 "write_zeroes": true, 00:21:34.041 "zcopy": true, 00:21:34.041 "get_zone_info": false, 00:21:34.041 "zone_management": false, 00:21:34.041 "zone_append": false, 00:21:34.041 "compare": false, 00:21:34.041 "compare_and_write": false, 00:21:34.041 "abort": true, 00:21:34.041 "seek_hole": false, 00:21:34.041 "seek_data": false, 00:21:34.041 "copy": true, 00:21:34.041 "nvme_iov_md": 
false 00:21:34.041 }, 00:21:34.041 "memory_domains": [ 00:21:34.041 { 00:21:34.041 "dma_device_id": "system", 00:21:34.041 "dma_device_type": 1 00:21:34.041 }, 00:21:34.041 { 00:21:34.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:34.041 "dma_device_type": 2 00:21:34.041 } 00:21:34.041 ], 00:21:34.041 "driver_specific": {} 00:21:34.041 }' 00:21:34.041 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:34.041 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:34.041 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:34.041 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:34.300 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:34.300 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:34.300 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:34.300 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:34.300 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:34.300 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:34.300 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:34.300 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:34.300 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:34.300 13:21:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:34.300 13:21:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:34.560 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:34.560 "name": "BaseBdev4", 00:21:34.560 "aliases": [ 00:21:34.560 "3d337d06-ce47-46a4-aa8f-713ab251c4f6" 00:21:34.560 ], 00:21:34.560 "product_name": "Malloc disk", 00:21:34.560 "block_size": 512, 00:21:34.560 "num_blocks": 65536, 00:21:34.560 "uuid": "3d337d06-ce47-46a4-aa8f-713ab251c4f6", 00:21:34.560 "assigned_rate_limits": { 00:21:34.560 "rw_ios_per_sec": 0, 00:21:34.560 "rw_mbytes_per_sec": 0, 00:21:34.560 "r_mbytes_per_sec": 0, 00:21:34.560 "w_mbytes_per_sec": 0 00:21:34.560 }, 00:21:34.560 "claimed": true, 00:21:34.560 "claim_type": "exclusive_write", 00:21:34.560 "zoned": false, 00:21:34.560 "supported_io_types": { 00:21:34.560 "read": true, 00:21:34.560 "write": true, 00:21:34.560 "unmap": true, 00:21:34.560 "flush": true, 00:21:34.560 "reset": true, 00:21:34.560 "nvme_admin": false, 00:21:34.560 "nvme_io": false, 00:21:34.560 "nvme_io_md": false, 00:21:34.560 "write_zeroes": true, 00:21:34.560 "zcopy": true, 00:21:34.560 "get_zone_info": false, 00:21:34.560 "zone_management": false, 00:21:34.560 "zone_append": false, 00:21:34.560 "compare": false, 00:21:34.560 "compare_and_write": false, 00:21:34.560 "abort": true, 00:21:34.560 "seek_hole": false, 00:21:34.560 "seek_data": false, 00:21:34.560 "copy": true, 00:21:34.560 "nvme_iov_md": false 00:21:34.560 }, 00:21:34.560 "memory_domains": [ 00:21:34.560 { 00:21:34.560 "dma_device_id": "system", 00:21:34.560 "dma_device_type": 1 00:21:34.560 }, 00:21:34.560 { 00:21:34.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:34.560 "dma_device_type": 2 00:21:34.560 } 00:21:34.560 ], 00:21:34.560 "driver_specific": {} 00:21:34.560 }' 00:21:34.560 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:34.819 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:21:34.819 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:34.819 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:34.819 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:34.819 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:34.819 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:34.819 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:34.819 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:34.819 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:34.819 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:35.078 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:35.078 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:35.078 [2024-07-26 13:21:15.587907] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:35.337 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:35.337 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:21:35.337 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:35.337 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:21:35.337 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:21:35.337 13:21:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:21:35.337 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:35.337 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:35.337 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:35.337 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:35.337 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:35.337 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:35.337 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:35.337 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:35.337 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:35.337 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.337 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:35.337 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:35.337 "name": "Existed_Raid", 00:21:35.337 "uuid": "3c06706a-32fd-448b-b25f-ca40def78014", 00:21:35.337 "strip_size_kb": 0, 00:21:35.337 "state": "online", 00:21:35.337 "raid_level": "raid1", 00:21:35.337 "superblock": true, 00:21:35.337 "num_base_bdevs": 4, 00:21:35.337 "num_base_bdevs_discovered": 3, 00:21:35.337 "num_base_bdevs_operational": 3, 00:21:35.337 "base_bdevs_list": [ 00:21:35.337 { 00:21:35.337 "name": null, 
00:21:35.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.337 "is_configured": false, 00:21:35.338 "data_offset": 2048, 00:21:35.338 "data_size": 63488 00:21:35.338 }, 00:21:35.338 { 00:21:35.338 "name": "BaseBdev2", 00:21:35.338 "uuid": "389697bf-8bca-4445-bac7-facc6ba598d5", 00:21:35.338 "is_configured": true, 00:21:35.338 "data_offset": 2048, 00:21:35.338 "data_size": 63488 00:21:35.338 }, 00:21:35.338 { 00:21:35.338 "name": "BaseBdev3", 00:21:35.338 "uuid": "fdd5fcc6-5a5b-40a0-b95f-fcb0b2979abb", 00:21:35.338 "is_configured": true, 00:21:35.338 "data_offset": 2048, 00:21:35.338 "data_size": 63488 00:21:35.338 }, 00:21:35.338 { 00:21:35.338 "name": "BaseBdev4", 00:21:35.338 "uuid": "3d337d06-ce47-46a4-aa8f-713ab251c4f6", 00:21:35.338 "is_configured": true, 00:21:35.338 "data_offset": 2048, 00:21:35.338 "data_size": 63488 00:21:35.338 } 00:21:35.338 ] 00:21:35.338 }' 00:21:35.338 13:21:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:35.338 13:21:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.906 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:35.906 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:35.906 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.906 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:36.165 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:36.165 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:36.165 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:36.424 [2024-07-26 13:21:16.860359] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:36.424 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:36.424 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:36.424 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.424 13:21:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:36.719 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:36.719 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:36.719 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:36.979 [2024-07-26 13:21:17.331726] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:36.979 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:36.979 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:36.979 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.979 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:37.239 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:37.239 13:21:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:37.239 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:37.498 [2024-07-26 13:21:17.798776] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:37.498 [2024-07-26 13:21:17.798855] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:37.498 [2024-07-26 13:21:17.809194] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:37.498 [2024-07-26 13:21:17.809229] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:37.498 [2024-07-26 13:21:17.809241] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1138840 name Existed_Raid, state offline 00:21:37.498 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:37.498 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:37.498 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.498 13:21:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:37.757 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:37.757 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:37.757 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:21:37.757 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:37.757 13:21:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:37.758 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:37.758 BaseBdev2 00:21:38.017 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:38.017 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:21:38.017 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:38.017 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:38.017 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:38.017 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:38.017 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:38.017 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:38.276 [ 00:21:38.276 { 00:21:38.276 "name": "BaseBdev2", 00:21:38.276 "aliases": [ 00:21:38.276 "af5f392e-e74d-4cc4-954e-412086fcd0b9" 00:21:38.276 ], 00:21:38.276 "product_name": "Malloc disk", 00:21:38.276 "block_size": 512, 00:21:38.276 "num_blocks": 65536, 00:21:38.276 "uuid": "af5f392e-e74d-4cc4-954e-412086fcd0b9", 00:21:38.276 "assigned_rate_limits": { 00:21:38.276 "rw_ios_per_sec": 0, 00:21:38.276 "rw_mbytes_per_sec": 0, 00:21:38.276 "r_mbytes_per_sec": 0, 00:21:38.276 "w_mbytes_per_sec": 0 00:21:38.276 }, 00:21:38.276 "claimed": false, 00:21:38.276 "zoned": false, 
00:21:38.276 "supported_io_types": { 00:21:38.276 "read": true, 00:21:38.276 "write": true, 00:21:38.276 "unmap": true, 00:21:38.276 "flush": true, 00:21:38.276 "reset": true, 00:21:38.276 "nvme_admin": false, 00:21:38.276 "nvme_io": false, 00:21:38.276 "nvme_io_md": false, 00:21:38.276 "write_zeroes": true, 00:21:38.276 "zcopy": true, 00:21:38.276 "get_zone_info": false, 00:21:38.276 "zone_management": false, 00:21:38.276 "zone_append": false, 00:21:38.276 "compare": false, 00:21:38.276 "compare_and_write": false, 00:21:38.276 "abort": true, 00:21:38.276 "seek_hole": false, 00:21:38.276 "seek_data": false, 00:21:38.276 "copy": true, 00:21:38.276 "nvme_iov_md": false 00:21:38.276 }, 00:21:38.276 "memory_domains": [ 00:21:38.276 { 00:21:38.276 "dma_device_id": "system", 00:21:38.276 "dma_device_type": 1 00:21:38.276 }, 00:21:38.276 { 00:21:38.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:38.276 "dma_device_type": 2 00:21:38.276 } 00:21:38.276 ], 00:21:38.276 "driver_specific": {} 00:21:38.276 } 00:21:38.276 ] 00:21:38.276 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:38.276 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:38.276 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:38.276 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:38.535 BaseBdev3 00:21:38.535 13:21:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:38.535 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:21:38.535 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:38.535 13:21:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:38.535 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:38.535 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:38.535 13:21:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:38.794 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:39.053 [ 00:21:39.053 { 00:21:39.053 "name": "BaseBdev3", 00:21:39.053 "aliases": [ 00:21:39.053 "93c84400-7576-49bf-bd40-4bf1fc1c1152" 00:21:39.053 ], 00:21:39.053 "product_name": "Malloc disk", 00:21:39.053 "block_size": 512, 00:21:39.053 "num_blocks": 65536, 00:21:39.053 "uuid": "93c84400-7576-49bf-bd40-4bf1fc1c1152", 00:21:39.053 "assigned_rate_limits": { 00:21:39.053 "rw_ios_per_sec": 0, 00:21:39.053 "rw_mbytes_per_sec": 0, 00:21:39.053 "r_mbytes_per_sec": 0, 00:21:39.053 "w_mbytes_per_sec": 0 00:21:39.053 }, 00:21:39.053 "claimed": false, 00:21:39.053 "zoned": false, 00:21:39.053 "supported_io_types": { 00:21:39.053 "read": true, 00:21:39.053 "write": true, 00:21:39.053 "unmap": true, 00:21:39.053 "flush": true, 00:21:39.053 "reset": true, 00:21:39.053 "nvme_admin": false, 00:21:39.053 "nvme_io": false, 00:21:39.053 "nvme_io_md": false, 00:21:39.053 "write_zeroes": true, 00:21:39.053 "zcopy": true, 00:21:39.053 "get_zone_info": false, 00:21:39.053 "zone_management": false, 00:21:39.053 "zone_append": false, 00:21:39.053 "compare": false, 00:21:39.053 "compare_and_write": false, 00:21:39.053 "abort": true, 00:21:39.053 "seek_hole": false, 00:21:39.053 "seek_data": false, 00:21:39.053 "copy": true, 00:21:39.053 "nvme_iov_md": 
false 00:21:39.053 }, 00:21:39.053 "memory_domains": [ 00:21:39.053 { 00:21:39.053 "dma_device_id": "system", 00:21:39.053 "dma_device_type": 1 00:21:39.053 }, 00:21:39.053 { 00:21:39.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.053 "dma_device_type": 2 00:21:39.053 } 00:21:39.053 ], 00:21:39.053 "driver_specific": {} 00:21:39.053 } 00:21:39.053 ] 00:21:39.053 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:39.053 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:39.053 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:39.053 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:39.312 BaseBdev4 00:21:39.312 13:21:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:21:39.312 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:21:39.312 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:39.312 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:39.312 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:39.312 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:39.312 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:39.571 13:21:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:39.571 [ 00:21:39.571 { 00:21:39.571 "name": "BaseBdev4", 00:21:39.571 "aliases": [ 00:21:39.571 "05864a41-8f36-44d4-8452-7d2d6fdb4517" 00:21:39.571 ], 00:21:39.571 "product_name": "Malloc disk", 00:21:39.571 "block_size": 512, 00:21:39.571 "num_blocks": 65536, 00:21:39.571 "uuid": "05864a41-8f36-44d4-8452-7d2d6fdb4517", 00:21:39.571 "assigned_rate_limits": { 00:21:39.571 "rw_ios_per_sec": 0, 00:21:39.571 "rw_mbytes_per_sec": 0, 00:21:39.571 "r_mbytes_per_sec": 0, 00:21:39.571 "w_mbytes_per_sec": 0 00:21:39.571 }, 00:21:39.571 "claimed": false, 00:21:39.571 "zoned": false, 00:21:39.571 "supported_io_types": { 00:21:39.571 "read": true, 00:21:39.571 "write": true, 00:21:39.571 "unmap": true, 00:21:39.571 "flush": true, 00:21:39.571 "reset": true, 00:21:39.571 "nvme_admin": false, 00:21:39.571 "nvme_io": false, 00:21:39.571 "nvme_io_md": false, 00:21:39.571 "write_zeroes": true, 00:21:39.571 "zcopy": true, 00:21:39.571 "get_zone_info": false, 00:21:39.571 "zone_management": false, 00:21:39.571 "zone_append": false, 00:21:39.571 "compare": false, 00:21:39.571 "compare_and_write": false, 00:21:39.571 "abort": true, 00:21:39.571 "seek_hole": false, 00:21:39.571 "seek_data": false, 00:21:39.571 "copy": true, 00:21:39.571 "nvme_iov_md": false 00:21:39.571 }, 00:21:39.571 "memory_domains": [ 00:21:39.571 { 00:21:39.571 "dma_device_id": "system", 00:21:39.571 "dma_device_type": 1 00:21:39.571 }, 00:21:39.571 { 00:21:39.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.571 "dma_device_type": 2 00:21:39.571 } 00:21:39.571 ], 00:21:39.571 "driver_specific": {} 00:21:39.571 } 00:21:39.571 ] 00:21:39.571 13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:39.571 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:39.571 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 
00:21:39.571 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:39.830 [2024-07-26 13:21:20.284244] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:39.830 [2024-07-26 13:21:20.284280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:39.830 [2024-07-26 13:21:20.284297] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:39.830 [2024-07-26 13:21:20.285550] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:39.830 [2024-07-26 13:21:20.285590] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:39.830 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:39.830 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:39.830 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:39.830 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:39.830 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:39.830 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:39.830 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:39.830 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:39.830 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:21:39.830 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:39.830 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.830 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:40.088 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:40.088 "name": "Existed_Raid", 00:21:40.088 "uuid": "3cb7724b-770f-44aa-8464-37885e8b253e", 00:21:40.088 "strip_size_kb": 0, 00:21:40.088 "state": "configuring", 00:21:40.088 "raid_level": "raid1", 00:21:40.088 "superblock": true, 00:21:40.088 "num_base_bdevs": 4, 00:21:40.088 "num_base_bdevs_discovered": 3, 00:21:40.088 "num_base_bdevs_operational": 4, 00:21:40.089 "base_bdevs_list": [ 00:21:40.089 { 00:21:40.089 "name": "BaseBdev1", 00:21:40.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.089 "is_configured": false, 00:21:40.089 "data_offset": 0, 00:21:40.089 "data_size": 0 00:21:40.089 }, 00:21:40.089 { 00:21:40.089 "name": "BaseBdev2", 00:21:40.089 "uuid": "af5f392e-e74d-4cc4-954e-412086fcd0b9", 00:21:40.089 "is_configured": true, 00:21:40.089 "data_offset": 2048, 00:21:40.089 "data_size": 63488 00:21:40.089 }, 00:21:40.089 { 00:21:40.089 "name": "BaseBdev3", 00:21:40.089 "uuid": "93c84400-7576-49bf-bd40-4bf1fc1c1152", 00:21:40.089 "is_configured": true, 00:21:40.089 "data_offset": 2048, 00:21:40.089 "data_size": 63488 00:21:40.089 }, 00:21:40.089 { 00:21:40.089 "name": "BaseBdev4", 00:21:40.089 "uuid": "05864a41-8f36-44d4-8452-7d2d6fdb4517", 00:21:40.089 "is_configured": true, 00:21:40.089 "data_offset": 2048, 00:21:40.089 "data_size": 63488 00:21:40.089 } 00:21:40.089 ] 00:21:40.089 }' 00:21:40.089 13:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:40.089 
13:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.657 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:40.916 [2024-07-26 13:21:21.314986] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:40.916 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:40.916 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:40.916 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:40.916 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:40.916 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:40.916 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:40.916 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:40.916 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:40.916 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:40.916 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:40.916 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.916 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:41.174 13:21:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:41.174 "name": "Existed_Raid", 00:21:41.174 "uuid": "3cb7724b-770f-44aa-8464-37885e8b253e", 00:21:41.174 "strip_size_kb": 0, 00:21:41.174 "state": "configuring", 00:21:41.174 "raid_level": "raid1", 00:21:41.174 "superblock": true, 00:21:41.174 "num_base_bdevs": 4, 00:21:41.174 "num_base_bdevs_discovered": 2, 00:21:41.174 "num_base_bdevs_operational": 4, 00:21:41.174 "base_bdevs_list": [ 00:21:41.174 { 00:21:41.174 "name": "BaseBdev1", 00:21:41.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.174 "is_configured": false, 00:21:41.174 "data_offset": 0, 00:21:41.174 "data_size": 0 00:21:41.174 }, 00:21:41.174 { 00:21:41.174 "name": null, 00:21:41.174 "uuid": "af5f392e-e74d-4cc4-954e-412086fcd0b9", 00:21:41.174 "is_configured": false, 00:21:41.174 "data_offset": 2048, 00:21:41.174 "data_size": 63488 00:21:41.174 }, 00:21:41.174 { 00:21:41.174 "name": "BaseBdev3", 00:21:41.174 "uuid": "93c84400-7576-49bf-bd40-4bf1fc1c1152", 00:21:41.174 "is_configured": true, 00:21:41.174 "data_offset": 2048, 00:21:41.174 "data_size": 63488 00:21:41.174 }, 00:21:41.174 { 00:21:41.174 "name": "BaseBdev4", 00:21:41.174 "uuid": "05864a41-8f36-44d4-8452-7d2d6fdb4517", 00:21:41.174 "is_configured": true, 00:21:41.174 "data_offset": 2048, 00:21:41.174 "data_size": 63488 00:21:41.174 } 00:21:41.174 ] 00:21:41.174 }' 00:21:41.174 13:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:41.174 13:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.740 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:41.740 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.999 13:21:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:41.999 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:42.259 [2024-07-26 13:21:22.537444] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:42.259 BaseBdev1 00:21:42.259 13:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:21:42.259 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:21:42.259 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:42.259 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:42.259 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:42.259 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:42.259 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:42.259 13:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:42.518 [ 00:21:42.518 { 00:21:42.518 "name": "BaseBdev1", 00:21:42.518 "aliases": [ 00:21:42.518 "80593d73-f128-4c0e-9b51-96667d3fc84f" 00:21:42.518 ], 00:21:42.518 "product_name": "Malloc disk", 00:21:42.518 "block_size": 512, 00:21:42.518 "num_blocks": 65536, 00:21:42.518 "uuid": "80593d73-f128-4c0e-9b51-96667d3fc84f", 00:21:42.518 "assigned_rate_limits": { 00:21:42.518 "rw_ios_per_sec": 0, 00:21:42.518 
"rw_mbytes_per_sec": 0, 00:21:42.518 "r_mbytes_per_sec": 0, 00:21:42.518 "w_mbytes_per_sec": 0 00:21:42.518 }, 00:21:42.518 "claimed": true, 00:21:42.518 "claim_type": "exclusive_write", 00:21:42.518 "zoned": false, 00:21:42.518 "supported_io_types": { 00:21:42.518 "read": true, 00:21:42.518 "write": true, 00:21:42.518 "unmap": true, 00:21:42.518 "flush": true, 00:21:42.518 "reset": true, 00:21:42.518 "nvme_admin": false, 00:21:42.518 "nvme_io": false, 00:21:42.518 "nvme_io_md": false, 00:21:42.518 "write_zeroes": true, 00:21:42.519 "zcopy": true, 00:21:42.519 "get_zone_info": false, 00:21:42.519 "zone_management": false, 00:21:42.519 "zone_append": false, 00:21:42.519 "compare": false, 00:21:42.519 "compare_and_write": false, 00:21:42.519 "abort": true, 00:21:42.519 "seek_hole": false, 00:21:42.519 "seek_data": false, 00:21:42.519 "copy": true, 00:21:42.519 "nvme_iov_md": false 00:21:42.519 }, 00:21:42.519 "memory_domains": [ 00:21:42.519 { 00:21:42.519 "dma_device_id": "system", 00:21:42.519 "dma_device_type": 1 00:21:42.519 }, 00:21:42.519 { 00:21:42.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:42.519 "dma_device_type": 2 00:21:42.519 } 00:21:42.519 ], 00:21:42.519 "driver_specific": {} 00:21:42.519 } 00:21:42.519 ] 00:21:42.519 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:42.519 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:42.519 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:42.519 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:42.519 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:42.519 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:42.519 13:21:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:42.519 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:42.519 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:42.519 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:42.519 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:42.519 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.519 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:42.778 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:42.778 "name": "Existed_Raid", 00:21:42.778 "uuid": "3cb7724b-770f-44aa-8464-37885e8b253e", 00:21:42.778 "strip_size_kb": 0, 00:21:42.778 "state": "configuring", 00:21:42.778 "raid_level": "raid1", 00:21:42.778 "superblock": true, 00:21:42.778 "num_base_bdevs": 4, 00:21:42.778 "num_base_bdevs_discovered": 3, 00:21:42.778 "num_base_bdevs_operational": 4, 00:21:42.778 "base_bdevs_list": [ 00:21:42.778 { 00:21:42.778 "name": "BaseBdev1", 00:21:42.778 "uuid": "80593d73-f128-4c0e-9b51-96667d3fc84f", 00:21:42.778 "is_configured": true, 00:21:42.778 "data_offset": 2048, 00:21:42.778 "data_size": 63488 00:21:42.778 }, 00:21:42.778 { 00:21:42.778 "name": null, 00:21:42.778 "uuid": "af5f392e-e74d-4cc4-954e-412086fcd0b9", 00:21:42.778 "is_configured": false, 00:21:42.778 "data_offset": 2048, 00:21:42.778 "data_size": 63488 00:21:42.778 }, 00:21:42.778 { 00:21:42.778 "name": "BaseBdev3", 00:21:42.778 "uuid": "93c84400-7576-49bf-bd40-4bf1fc1c1152", 00:21:42.778 "is_configured": true, 00:21:42.778 
"data_offset": 2048, 00:21:42.778 "data_size": 63488 00:21:42.778 }, 00:21:42.778 { 00:21:42.778 "name": "BaseBdev4", 00:21:42.778 "uuid": "05864a41-8f36-44d4-8452-7d2d6fdb4517", 00:21:42.778 "is_configured": true, 00:21:42.778 "data_offset": 2048, 00:21:42.778 "data_size": 63488 00:21:42.778 } 00:21:42.778 ] 00:21:42.778 }' 00:21:42.778 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:42.778 13:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.346 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.346 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:43.605 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:43.605 13:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:43.605 [2024-07-26 13:21:24.097576] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:43.605 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:43.605 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:43.605 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:43.605 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:43.605 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:43.605 13:21:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:43.605 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:43.605 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:43.605 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:43.605 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:43.864 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:43.864 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.864 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:43.864 "name": "Existed_Raid", 00:21:43.864 "uuid": "3cb7724b-770f-44aa-8464-37885e8b253e", 00:21:43.864 "strip_size_kb": 0, 00:21:43.864 "state": "configuring", 00:21:43.864 "raid_level": "raid1", 00:21:43.864 "superblock": true, 00:21:43.864 "num_base_bdevs": 4, 00:21:43.865 "num_base_bdevs_discovered": 2, 00:21:43.865 "num_base_bdevs_operational": 4, 00:21:43.865 "base_bdevs_list": [ 00:21:43.865 { 00:21:43.865 "name": "BaseBdev1", 00:21:43.865 "uuid": "80593d73-f128-4c0e-9b51-96667d3fc84f", 00:21:43.865 "is_configured": true, 00:21:43.865 "data_offset": 2048, 00:21:43.865 "data_size": 63488 00:21:43.865 }, 00:21:43.865 { 00:21:43.865 "name": null, 00:21:43.865 "uuid": "af5f392e-e74d-4cc4-954e-412086fcd0b9", 00:21:43.865 "is_configured": false, 00:21:43.865 "data_offset": 2048, 00:21:43.865 "data_size": 63488 00:21:43.865 }, 00:21:43.865 { 00:21:43.865 "name": null, 00:21:43.865 "uuid": "93c84400-7576-49bf-bd40-4bf1fc1c1152", 00:21:43.865 "is_configured": false, 00:21:43.865 "data_offset": 2048, 00:21:43.865 "data_size": 
63488 00:21:43.865 }, 00:21:43.865 { 00:21:43.865 "name": "BaseBdev4", 00:21:43.865 "uuid": "05864a41-8f36-44d4-8452-7d2d6fdb4517", 00:21:43.865 "is_configured": true, 00:21:43.865 "data_offset": 2048, 00:21:43.865 "data_size": 63488 00:21:43.865 } 00:21:43.865 ] 00:21:43.865 }' 00:21:43.865 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:43.865 13:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.432 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.432 13:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:44.692 13:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:44.692 13:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:44.951 [2024-07-26 13:21:25.288837] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:44.951 13:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:44.951 13:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:44.951 13:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:44.951 13:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:44.951 13:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:44.951 13:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- 
# local num_base_bdevs_operational=4 00:21:44.951 13:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:44.951 13:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:44.951 13:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:44.951 13:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:44.951 13:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.951 13:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:45.210 13:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:45.211 "name": "Existed_Raid", 00:21:45.211 "uuid": "3cb7724b-770f-44aa-8464-37885e8b253e", 00:21:45.211 "strip_size_kb": 0, 00:21:45.211 "state": "configuring", 00:21:45.211 "raid_level": "raid1", 00:21:45.211 "superblock": true, 00:21:45.211 "num_base_bdevs": 4, 00:21:45.211 "num_base_bdevs_discovered": 3, 00:21:45.211 "num_base_bdevs_operational": 4, 00:21:45.211 "base_bdevs_list": [ 00:21:45.211 { 00:21:45.211 "name": "BaseBdev1", 00:21:45.211 "uuid": "80593d73-f128-4c0e-9b51-96667d3fc84f", 00:21:45.211 "is_configured": true, 00:21:45.211 "data_offset": 2048, 00:21:45.211 "data_size": 63488 00:21:45.211 }, 00:21:45.211 { 00:21:45.211 "name": null, 00:21:45.211 "uuid": "af5f392e-e74d-4cc4-954e-412086fcd0b9", 00:21:45.211 "is_configured": false, 00:21:45.211 "data_offset": 2048, 00:21:45.211 "data_size": 63488 00:21:45.211 }, 00:21:45.211 { 00:21:45.211 "name": "BaseBdev3", 00:21:45.211 "uuid": "93c84400-7576-49bf-bd40-4bf1fc1c1152", 00:21:45.211 "is_configured": true, 00:21:45.211 "data_offset": 2048, 00:21:45.211 "data_size": 63488 00:21:45.211 
}, 00:21:45.211 { 00:21:45.211 "name": "BaseBdev4", 00:21:45.211 "uuid": "05864a41-8f36-44d4-8452-7d2d6fdb4517", 00:21:45.211 "is_configured": true, 00:21:45.211 "data_offset": 2048, 00:21:45.211 "data_size": 63488 00:21:45.211 } 00:21:45.211 ] 00:21:45.211 }' 00:21:45.211 13:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:45.211 13:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.779 13:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:45.779 13:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.038 13:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:46.039 13:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:46.039 [2024-07-26 13:21:26.552188] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:46.298 13:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:46.298 13:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:46.298 13:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:46.298 13:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:46.298 13:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:46.298 13:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:46.298 13:21:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:46.298 13:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:46.298 13:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:46.298 13:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:46.298 13:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.298 13:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:46.298 13:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:46.298 "name": "Existed_Raid", 00:21:46.298 "uuid": "3cb7724b-770f-44aa-8464-37885e8b253e", 00:21:46.298 "strip_size_kb": 0, 00:21:46.298 "state": "configuring", 00:21:46.298 "raid_level": "raid1", 00:21:46.298 "superblock": true, 00:21:46.298 "num_base_bdevs": 4, 00:21:46.298 "num_base_bdevs_discovered": 2, 00:21:46.298 "num_base_bdevs_operational": 4, 00:21:46.298 "base_bdevs_list": [ 00:21:46.298 { 00:21:46.298 "name": null, 00:21:46.298 "uuid": "80593d73-f128-4c0e-9b51-96667d3fc84f", 00:21:46.298 "is_configured": false, 00:21:46.298 "data_offset": 2048, 00:21:46.298 "data_size": 63488 00:21:46.298 }, 00:21:46.298 { 00:21:46.298 "name": null, 00:21:46.298 "uuid": "af5f392e-e74d-4cc4-954e-412086fcd0b9", 00:21:46.298 "is_configured": false, 00:21:46.298 "data_offset": 2048, 00:21:46.298 "data_size": 63488 00:21:46.298 }, 00:21:46.298 { 00:21:46.298 "name": "BaseBdev3", 00:21:46.298 "uuid": "93c84400-7576-49bf-bd40-4bf1fc1c1152", 00:21:46.298 "is_configured": true, 00:21:46.298 "data_offset": 2048, 00:21:46.298 "data_size": 63488 00:21:46.298 }, 00:21:46.298 { 00:21:46.298 "name": "BaseBdev4", 00:21:46.298 
"uuid": "05864a41-8f36-44d4-8452-7d2d6fdb4517", 00:21:46.298 "is_configured": true, 00:21:46.298 "data_offset": 2048, 00:21:46.298 "data_size": 63488 00:21:46.298 } 00:21:46.298 ] 00:21:46.298 }' 00:21:46.298 13:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:46.298 13:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.866 13:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.866 13:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:47.125 13:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:47.125 13:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:47.385 [2024-07-26 13:21:27.749194] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:47.385 13:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:21:47.385 13:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:47.385 13:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:47.385 13:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:47.385 13:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:47.385 13:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:47.385 13:21:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:47.385 13:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:47.385 13:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:47.385 13:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:47.385 13:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.385 13:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:47.644 13:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:47.644 "name": "Existed_Raid", 00:21:47.644 "uuid": "3cb7724b-770f-44aa-8464-37885e8b253e", 00:21:47.644 "strip_size_kb": 0, 00:21:47.644 "state": "configuring", 00:21:47.644 "raid_level": "raid1", 00:21:47.644 "superblock": true, 00:21:47.644 "num_base_bdevs": 4, 00:21:47.644 "num_base_bdevs_discovered": 3, 00:21:47.644 "num_base_bdevs_operational": 4, 00:21:47.644 "base_bdevs_list": [ 00:21:47.644 { 00:21:47.644 "name": null, 00:21:47.644 "uuid": "80593d73-f128-4c0e-9b51-96667d3fc84f", 00:21:47.644 "is_configured": false, 00:21:47.644 "data_offset": 2048, 00:21:47.644 "data_size": 63488 00:21:47.644 }, 00:21:47.644 { 00:21:47.644 "name": "BaseBdev2", 00:21:47.644 "uuid": "af5f392e-e74d-4cc4-954e-412086fcd0b9", 00:21:47.644 "is_configured": true, 00:21:47.644 "data_offset": 2048, 00:21:47.644 "data_size": 63488 00:21:47.644 }, 00:21:47.644 { 00:21:47.644 "name": "BaseBdev3", 00:21:47.644 "uuid": "93c84400-7576-49bf-bd40-4bf1fc1c1152", 00:21:47.644 "is_configured": true, 00:21:47.644 "data_offset": 2048, 00:21:47.644 "data_size": 63488 00:21:47.644 }, 00:21:47.644 { 00:21:47.644 "name": "BaseBdev4", 
00:21:47.644 "uuid": "05864a41-8f36-44d4-8452-7d2d6fdb4517", 00:21:47.644 "is_configured": true, 00:21:47.644 "data_offset": 2048, 00:21:47.644 "data_size": 63488 00:21:47.644 } 00:21:47.644 ] 00:21:47.644 }' 00:21:47.644 13:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:47.644 13:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.211 13:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.211 13:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:48.211 13:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:21:48.212 13:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.212 13:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:48.470 13:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 80593d73-f128-4c0e-9b51-96667d3fc84f 00:21:48.729 [2024-07-26 13:21:29.148103] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:48.729 [2024-07-26 13:21:29.148258] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x11372c0 00:21:48.729 [2024-07-26 13:21:29.148271] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:48.729 [2024-07-26 13:21:29.148425] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12dc030 00:21:48.729 [2024-07-26 13:21:29.148538] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x11372c0 00:21:48.730 [2024-07-26 13:21:29.148547] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x11372c0 00:21:48.730 [2024-07-26 13:21:29.148634] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:48.730 NewBaseBdev 00:21:48.730 13:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:21:48.730 13:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:21:48.730 13:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:48.730 13:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:48.730 13:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:48.730 13:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:48.730 13:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:48.989 13:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:49.248 [ 00:21:49.248 { 00:21:49.248 "name": "NewBaseBdev", 00:21:49.248 "aliases": [ 00:21:49.248 "80593d73-f128-4c0e-9b51-96667d3fc84f" 00:21:49.248 ], 00:21:49.248 "product_name": "Malloc disk", 00:21:49.248 "block_size": 512, 00:21:49.248 "num_blocks": 65536, 00:21:49.248 "uuid": "80593d73-f128-4c0e-9b51-96667d3fc84f", 00:21:49.248 "assigned_rate_limits": { 00:21:49.248 "rw_ios_per_sec": 0, 00:21:49.248 "rw_mbytes_per_sec": 0, 00:21:49.248 "r_mbytes_per_sec": 0, 00:21:49.248 
"w_mbytes_per_sec": 0 00:21:49.248 }, 00:21:49.248 "claimed": true, 00:21:49.248 "claim_type": "exclusive_write", 00:21:49.248 "zoned": false, 00:21:49.248 "supported_io_types": { 00:21:49.248 "read": true, 00:21:49.248 "write": true, 00:21:49.248 "unmap": true, 00:21:49.248 "flush": true, 00:21:49.248 "reset": true, 00:21:49.248 "nvme_admin": false, 00:21:49.248 "nvme_io": false, 00:21:49.248 "nvme_io_md": false, 00:21:49.248 "write_zeroes": true, 00:21:49.248 "zcopy": true, 00:21:49.248 "get_zone_info": false, 00:21:49.248 "zone_management": false, 00:21:49.248 "zone_append": false, 00:21:49.248 "compare": false, 00:21:49.248 "compare_and_write": false, 00:21:49.248 "abort": true, 00:21:49.248 "seek_hole": false, 00:21:49.248 "seek_data": false, 00:21:49.249 "copy": true, 00:21:49.249 "nvme_iov_md": false 00:21:49.249 }, 00:21:49.249 "memory_domains": [ 00:21:49.249 { 00:21:49.249 "dma_device_id": "system", 00:21:49.249 "dma_device_type": 1 00:21:49.249 }, 00:21:49.249 { 00:21:49.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.249 "dma_device_type": 2 00:21:49.249 } 00:21:49.249 ], 00:21:49.249 "driver_specific": {} 00:21:49.249 } 00:21:49.249 ] 00:21:49.249 13:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:49.249 13:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:21:49.249 13:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:49.249 13:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:49.249 13:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:49.249 13:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:49.249 13:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:21:49.249 13:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:49.249 13:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:49.249 13:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:49.249 13:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:49.249 13:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.249 13:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:49.509 13:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:49.509 "name": "Existed_Raid", 00:21:49.509 "uuid": "3cb7724b-770f-44aa-8464-37885e8b253e", 00:21:49.509 "strip_size_kb": 0, 00:21:49.509 "state": "online", 00:21:49.509 "raid_level": "raid1", 00:21:49.509 "superblock": true, 00:21:49.509 "num_base_bdevs": 4, 00:21:49.509 "num_base_bdevs_discovered": 4, 00:21:49.509 "num_base_bdevs_operational": 4, 00:21:49.509 "base_bdevs_list": [ 00:21:49.509 { 00:21:49.509 "name": "NewBaseBdev", 00:21:49.509 "uuid": "80593d73-f128-4c0e-9b51-96667d3fc84f", 00:21:49.509 "is_configured": true, 00:21:49.509 "data_offset": 2048, 00:21:49.509 "data_size": 63488 00:21:49.509 }, 00:21:49.509 { 00:21:49.509 "name": "BaseBdev2", 00:21:49.509 "uuid": "af5f392e-e74d-4cc4-954e-412086fcd0b9", 00:21:49.509 "is_configured": true, 00:21:49.509 "data_offset": 2048, 00:21:49.509 "data_size": 63488 00:21:49.509 }, 00:21:49.509 { 00:21:49.509 "name": "BaseBdev3", 00:21:49.509 "uuid": "93c84400-7576-49bf-bd40-4bf1fc1c1152", 00:21:49.509 "is_configured": true, 00:21:49.509 "data_offset": 2048, 00:21:49.509 "data_size": 63488 00:21:49.509 }, 
00:21:49.509 { 00:21:49.509 "name": "BaseBdev4", 00:21:49.509 "uuid": "05864a41-8f36-44d4-8452-7d2d6fdb4517", 00:21:49.509 "is_configured": true, 00:21:49.509 "data_offset": 2048, 00:21:49.509 "data_size": 63488 00:21:49.509 } 00:21:49.509 ] 00:21:49.509 }' 00:21:49.509 13:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:49.509 13:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.147 13:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:21:50.147 13:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:50.147 13:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:50.147 13:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:50.147 13:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:50.147 13:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:21:50.147 13:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:50.147 13:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:50.147 [2024-07-26 13:21:30.616285] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:50.147 13:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:50.147 "name": "Existed_Raid", 00:21:50.147 "aliases": [ 00:21:50.147 "3cb7724b-770f-44aa-8464-37885e8b253e" 00:21:50.147 ], 00:21:50.147 "product_name": "Raid Volume", 00:21:50.147 "block_size": 512, 00:21:50.147 "num_blocks": 63488, 00:21:50.147 "uuid": "3cb7724b-770f-44aa-8464-37885e8b253e", 
00:21:50.147 "assigned_rate_limits": { 00:21:50.147 "rw_ios_per_sec": 0, 00:21:50.147 "rw_mbytes_per_sec": 0, 00:21:50.147 "r_mbytes_per_sec": 0, 00:21:50.147 "w_mbytes_per_sec": 0 00:21:50.147 }, 00:21:50.147 "claimed": false, 00:21:50.147 "zoned": false, 00:21:50.147 "supported_io_types": { 00:21:50.147 "read": true, 00:21:50.147 "write": true, 00:21:50.147 "unmap": false, 00:21:50.147 "flush": false, 00:21:50.147 "reset": true, 00:21:50.147 "nvme_admin": false, 00:21:50.147 "nvme_io": false, 00:21:50.147 "nvme_io_md": false, 00:21:50.147 "write_zeroes": true, 00:21:50.147 "zcopy": false, 00:21:50.147 "get_zone_info": false, 00:21:50.147 "zone_management": false, 00:21:50.147 "zone_append": false, 00:21:50.147 "compare": false, 00:21:50.147 "compare_and_write": false, 00:21:50.147 "abort": false, 00:21:50.147 "seek_hole": false, 00:21:50.147 "seek_data": false, 00:21:50.147 "copy": false, 00:21:50.147 "nvme_iov_md": false 00:21:50.147 }, 00:21:50.147 "memory_domains": [ 00:21:50.147 { 00:21:50.147 "dma_device_id": "system", 00:21:50.147 "dma_device_type": 1 00:21:50.147 }, 00:21:50.147 { 00:21:50.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.147 "dma_device_type": 2 00:21:50.147 }, 00:21:50.147 { 00:21:50.147 "dma_device_id": "system", 00:21:50.147 "dma_device_type": 1 00:21:50.147 }, 00:21:50.147 { 00:21:50.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.147 "dma_device_type": 2 00:21:50.147 }, 00:21:50.147 { 00:21:50.147 "dma_device_id": "system", 00:21:50.147 "dma_device_type": 1 00:21:50.147 }, 00:21:50.147 { 00:21:50.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.147 "dma_device_type": 2 00:21:50.147 }, 00:21:50.147 { 00:21:50.147 "dma_device_id": "system", 00:21:50.147 "dma_device_type": 1 00:21:50.147 }, 00:21:50.147 { 00:21:50.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.147 "dma_device_type": 2 00:21:50.147 } 00:21:50.147 ], 00:21:50.147 "driver_specific": { 00:21:50.147 "raid": { 00:21:50.147 "uuid": 
"3cb7724b-770f-44aa-8464-37885e8b253e", 00:21:50.147 "strip_size_kb": 0, 00:21:50.147 "state": "online", 00:21:50.147 "raid_level": "raid1", 00:21:50.147 "superblock": true, 00:21:50.147 "num_base_bdevs": 4, 00:21:50.147 "num_base_bdevs_discovered": 4, 00:21:50.147 "num_base_bdevs_operational": 4, 00:21:50.147 "base_bdevs_list": [ 00:21:50.147 { 00:21:50.147 "name": "NewBaseBdev", 00:21:50.147 "uuid": "80593d73-f128-4c0e-9b51-96667d3fc84f", 00:21:50.147 "is_configured": true, 00:21:50.147 "data_offset": 2048, 00:21:50.147 "data_size": 63488 00:21:50.147 }, 00:21:50.147 { 00:21:50.147 "name": "BaseBdev2", 00:21:50.147 "uuid": "af5f392e-e74d-4cc4-954e-412086fcd0b9", 00:21:50.147 "is_configured": true, 00:21:50.147 "data_offset": 2048, 00:21:50.147 "data_size": 63488 00:21:50.147 }, 00:21:50.147 { 00:21:50.147 "name": "BaseBdev3", 00:21:50.147 "uuid": "93c84400-7576-49bf-bd40-4bf1fc1c1152", 00:21:50.147 "is_configured": true, 00:21:50.147 "data_offset": 2048, 00:21:50.147 "data_size": 63488 00:21:50.147 }, 00:21:50.147 { 00:21:50.147 "name": "BaseBdev4", 00:21:50.147 "uuid": "05864a41-8f36-44d4-8452-7d2d6fdb4517", 00:21:50.147 "is_configured": true, 00:21:50.147 "data_offset": 2048, 00:21:50.147 "data_size": 63488 00:21:50.147 } 00:21:50.147 ] 00:21:50.147 } 00:21:50.147 } 00:21:50.147 }' 00:21:50.147 13:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:50.407 13:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:21:50.407 BaseBdev2 00:21:50.407 BaseBdev3 00:21:50.407 BaseBdev4' 00:21:50.407 13:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:50.407 13:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
NewBaseBdev 00:21:50.407 13:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:50.407 13:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:50.407 "name": "NewBaseBdev", 00:21:50.407 "aliases": [ 00:21:50.407 "80593d73-f128-4c0e-9b51-96667d3fc84f" 00:21:50.407 ], 00:21:50.407 "product_name": "Malloc disk", 00:21:50.407 "block_size": 512, 00:21:50.407 "num_blocks": 65536, 00:21:50.407 "uuid": "80593d73-f128-4c0e-9b51-96667d3fc84f", 00:21:50.407 "assigned_rate_limits": { 00:21:50.407 "rw_ios_per_sec": 0, 00:21:50.407 "rw_mbytes_per_sec": 0, 00:21:50.407 "r_mbytes_per_sec": 0, 00:21:50.407 "w_mbytes_per_sec": 0 00:21:50.407 }, 00:21:50.407 "claimed": true, 00:21:50.407 "claim_type": "exclusive_write", 00:21:50.407 "zoned": false, 00:21:50.407 "supported_io_types": { 00:21:50.407 "read": true, 00:21:50.407 "write": true, 00:21:50.407 "unmap": true, 00:21:50.407 "flush": true, 00:21:50.407 "reset": true, 00:21:50.407 "nvme_admin": false, 00:21:50.407 "nvme_io": false, 00:21:50.407 "nvme_io_md": false, 00:21:50.407 "write_zeroes": true, 00:21:50.407 "zcopy": true, 00:21:50.407 "get_zone_info": false, 00:21:50.407 "zone_management": false, 00:21:50.407 "zone_append": false, 00:21:50.407 "compare": false, 00:21:50.407 "compare_and_write": false, 00:21:50.407 "abort": true, 00:21:50.407 "seek_hole": false, 00:21:50.407 "seek_data": false, 00:21:50.407 "copy": true, 00:21:50.407 "nvme_iov_md": false 00:21:50.407 }, 00:21:50.407 "memory_domains": [ 00:21:50.407 { 00:21:50.407 "dma_device_id": "system", 00:21:50.407 "dma_device_type": 1 00:21:50.407 }, 00:21:50.407 { 00:21:50.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:50.407 "dma_device_type": 2 00:21:50.407 } 00:21:50.407 ], 00:21:50.407 "driver_specific": {} 00:21:50.407 }' 00:21:50.407 13:21:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:50.666 13:21:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:50.666 13:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:50.666 13:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:50.666 13:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:50.666 13:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:50.666 13:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:50.666 13:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:50.666 13:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:50.666 13:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:50.925 13:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:50.925 13:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:50.925 13:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:50.925 13:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:50.925 13:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:51.184 13:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:51.184 "name": "BaseBdev2", 00:21:51.184 "aliases": [ 00:21:51.184 "af5f392e-e74d-4cc4-954e-412086fcd0b9" 00:21:51.184 ], 00:21:51.184 "product_name": "Malloc disk", 00:21:51.184 "block_size": 512, 00:21:51.184 "num_blocks": 65536, 00:21:51.184 "uuid": "af5f392e-e74d-4cc4-954e-412086fcd0b9", 00:21:51.184 
"assigned_rate_limits": { 00:21:51.184 "rw_ios_per_sec": 0, 00:21:51.184 "rw_mbytes_per_sec": 0, 00:21:51.184 "r_mbytes_per_sec": 0, 00:21:51.184 "w_mbytes_per_sec": 0 00:21:51.184 }, 00:21:51.184 "claimed": true, 00:21:51.185 "claim_type": "exclusive_write", 00:21:51.185 "zoned": false, 00:21:51.185 "supported_io_types": { 00:21:51.185 "read": true, 00:21:51.185 "write": true, 00:21:51.185 "unmap": true, 00:21:51.185 "flush": true, 00:21:51.185 "reset": true, 00:21:51.185 "nvme_admin": false, 00:21:51.185 "nvme_io": false, 00:21:51.185 "nvme_io_md": false, 00:21:51.185 "write_zeroes": true, 00:21:51.185 "zcopy": true, 00:21:51.185 "get_zone_info": false, 00:21:51.185 "zone_management": false, 00:21:51.185 "zone_append": false, 00:21:51.185 "compare": false, 00:21:51.185 "compare_and_write": false, 00:21:51.185 "abort": true, 00:21:51.185 "seek_hole": false, 00:21:51.185 "seek_data": false, 00:21:51.185 "copy": true, 00:21:51.185 "nvme_iov_md": false 00:21:51.185 }, 00:21:51.185 "memory_domains": [ 00:21:51.185 { 00:21:51.185 "dma_device_id": "system", 00:21:51.185 "dma_device_type": 1 00:21:51.185 }, 00:21:51.185 { 00:21:51.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.185 "dma_device_type": 2 00:21:51.185 } 00:21:51.185 ], 00:21:51.185 "driver_specific": {} 00:21:51.185 }' 00:21:51.185 13:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:51.185 13:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:51.185 13:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:51.185 13:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:51.185 13:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:51.185 13:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:51.185 13:21:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:51.444 13:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:51.444 13:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:51.444 13:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:51.444 13:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:51.444 13:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:51.444 13:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:51.444 13:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:51.444 13:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:51.703 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:51.703 "name": "BaseBdev3", 00:21:51.703 "aliases": [ 00:21:51.703 "93c84400-7576-49bf-bd40-4bf1fc1c1152" 00:21:51.703 ], 00:21:51.703 "product_name": "Malloc disk", 00:21:51.703 "block_size": 512, 00:21:51.703 "num_blocks": 65536, 00:21:51.703 "uuid": "93c84400-7576-49bf-bd40-4bf1fc1c1152", 00:21:51.703 "assigned_rate_limits": { 00:21:51.703 "rw_ios_per_sec": 0, 00:21:51.703 "rw_mbytes_per_sec": 0, 00:21:51.703 "r_mbytes_per_sec": 0, 00:21:51.703 "w_mbytes_per_sec": 0 00:21:51.703 }, 00:21:51.703 "claimed": true, 00:21:51.703 "claim_type": "exclusive_write", 00:21:51.703 "zoned": false, 00:21:51.703 "supported_io_types": { 00:21:51.703 "read": true, 00:21:51.703 "write": true, 00:21:51.703 "unmap": true, 00:21:51.703 "flush": true, 00:21:51.703 "reset": true, 00:21:51.703 "nvme_admin": false, 00:21:51.703 "nvme_io": false, 00:21:51.703 "nvme_io_md": false, 00:21:51.703 
"write_zeroes": true, 00:21:51.703 "zcopy": true, 00:21:51.703 "get_zone_info": false, 00:21:51.703 "zone_management": false, 00:21:51.703 "zone_append": false, 00:21:51.703 "compare": false, 00:21:51.703 "compare_and_write": false, 00:21:51.703 "abort": true, 00:21:51.703 "seek_hole": false, 00:21:51.703 "seek_data": false, 00:21:51.703 "copy": true, 00:21:51.703 "nvme_iov_md": false 00:21:51.703 }, 00:21:51.703 "memory_domains": [ 00:21:51.703 { 00:21:51.703 "dma_device_id": "system", 00:21:51.703 "dma_device_type": 1 00:21:51.703 }, 00:21:51.703 { 00:21:51.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.703 "dma_device_type": 2 00:21:51.703 } 00:21:51.703 ], 00:21:51.703 "driver_specific": {} 00:21:51.703 }' 00:21:51.703 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:51.703 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:51.703 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:51.703 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:51.703 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:51.962 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:51.962 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:51.962 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:51.962 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:51.962 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:51.962 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:51.962 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:21:51.962 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:51.962 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:51.962 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:52.221 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:52.221 "name": "BaseBdev4", 00:21:52.221 "aliases": [ 00:21:52.221 "05864a41-8f36-44d4-8452-7d2d6fdb4517" 00:21:52.221 ], 00:21:52.221 "product_name": "Malloc disk", 00:21:52.221 "block_size": 512, 00:21:52.221 "num_blocks": 65536, 00:21:52.221 "uuid": "05864a41-8f36-44d4-8452-7d2d6fdb4517", 00:21:52.221 "assigned_rate_limits": { 00:21:52.221 "rw_ios_per_sec": 0, 00:21:52.221 "rw_mbytes_per_sec": 0, 00:21:52.221 "r_mbytes_per_sec": 0, 00:21:52.221 "w_mbytes_per_sec": 0 00:21:52.221 }, 00:21:52.221 "claimed": true, 00:21:52.221 "claim_type": "exclusive_write", 00:21:52.221 "zoned": false, 00:21:52.221 "supported_io_types": { 00:21:52.221 "read": true, 00:21:52.221 "write": true, 00:21:52.221 "unmap": true, 00:21:52.221 "flush": true, 00:21:52.221 "reset": true, 00:21:52.221 "nvme_admin": false, 00:21:52.221 "nvme_io": false, 00:21:52.221 "nvme_io_md": false, 00:21:52.221 "write_zeroes": true, 00:21:52.221 "zcopy": true, 00:21:52.221 "get_zone_info": false, 00:21:52.221 "zone_management": false, 00:21:52.221 "zone_append": false, 00:21:52.221 "compare": false, 00:21:52.221 "compare_and_write": false, 00:21:52.221 "abort": true, 00:21:52.221 "seek_hole": false, 00:21:52.221 "seek_data": false, 00:21:52.221 "copy": true, 00:21:52.221 "nvme_iov_md": false 00:21:52.221 }, 00:21:52.221 "memory_domains": [ 00:21:52.221 { 00:21:52.221 "dma_device_id": "system", 00:21:52.221 "dma_device_type": 1 00:21:52.221 }, 00:21:52.221 { 00:21:52.221 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.221 "dma_device_type": 2 00:21:52.221 } 00:21:52.221 ], 00:21:52.221 "driver_specific": {} 00:21:52.221 }' 00:21:52.221 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:52.221 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:52.221 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:52.221 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:52.480 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:52.481 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:52.481 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:52.481 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:52.481 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:52.481 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:52.481 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:52.481 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:52.481 13:21:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:52.740 [2024-07-26 13:21:33.194978] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:52.740 [2024-07-26 13:21:33.195005] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:52.740 [2024-07-26 13:21:33.195049] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:21:52.740 [2024-07-26 13:21:33.195316] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:52.740 [2024-07-26 13:21:33.195329] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11372c0 name Existed_Raid, state offline 00:21:52.740 13:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 768001 00:21:52.740 13:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 768001 ']' 00:21:52.740 13:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 768001 00:21:52.740 13:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:21:52.740 13:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:52.740 13:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 768001 00:21:52.740 13:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:52.740 13:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:52.740 13:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 768001' 00:21:52.740 killing process with pid 768001 00:21:52.740 13:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 768001 00:21:52.740 [2024-07-26 13:21:33.260585] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:52.740 13:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 768001 00:21:52.998 [2024-07-26 13:21:33.292917] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:52.998 13:21:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:21:52.998 
00:21:52.998 real 0m30.289s 00:21:52.998 user 0m55.505s 00:21:52.998 sys 0m5.560s 00:21:52.998 13:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:52.998 13:21:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:52.998 ************************************ 00:21:52.998 END TEST raid_state_function_test_sb 00:21:52.998 ************************************ 00:21:52.998 13:21:33 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:21:52.998 13:21:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:52.998 13:21:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:52.998 13:21:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:53.258 ************************************ 00:21:53.258 START TEST raid_superblock_test 00:21:53.258 ************************************ 00:21:53.258 13:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:21:53.258 13:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:21:53.258 13:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:21:53.258 13:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:21:53.258 13:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:21:53.258 13:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:21:53.258 13:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:21:53.258 13:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:21:53.258 13:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:21:53.258 13:21:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:21:53.258 13:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:21:53.258 13:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:21:53.258 13:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:21:53.258 13:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:21:53.258 13:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:21:53.258 13:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:21:53.258 13:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:53.258 13:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=774115 00:21:53.258 13:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 774115 /var/tmp/spdk-raid.sock 00:21:53.258 13:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 774115 ']' 00:21:53.258 13:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:53.258 13:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:53.258 13:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:53.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:21:53.258 13:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:53.258 13:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.258 [2024-07-26 13:21:33.600449] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:21:53.258 [2024-07-26 13:21:33.600490] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid774115 ] 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3d:02.3 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3f:01.7 
cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:53.258 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:53.258 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:53.258 [2024-07-26 13:21:33.718019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.518 [2024-07-26 13:21:33.807454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.518 [2024-07-26 13:21:33.860344] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:53.518 [2024-07-26 13:21:33.860369] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:53.777 13:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:53.777 13:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:21:53.777 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:21:53.777 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 
00:21:53.777 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:21:53.777 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:21:53.777 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:53.777 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:53.777 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:21:53.777 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:53.777 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:54.037 malloc1 00:21:54.037 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:54.037 [2024-07-26 13:21:34.523870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:54.037 [2024-07-26 13:21:34.523913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.037 [2024-07-26 13:21:34.523930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc932f0 00:21:54.037 [2024-07-26 13:21:34.523942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.037 [2024-07-26 13:21:34.525429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.037 [2024-07-26 13:21:34.525457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:54.037 pt1 00:21:54.037 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( 
i++ )) 00:21:54.037 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:21:54.037 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:21:54.037 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:21:54.037 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:54.037 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:54.037 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:21:54.037 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:54.037 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:54.296 malloc2 00:21:54.296 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:54.555 [2024-07-26 13:21:34.961405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:54.555 [2024-07-26 13:21:34.961445] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.555 [2024-07-26 13:21:34.961460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc946d0 00:21:54.555 [2024-07-26 13:21:34.961472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.555 [2024-07-26 13:21:34.962869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.555 [2024-07-26 13:21:34.962895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt2 00:21:54.555 pt2 00:21:54.555 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:21:54.555 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:21:54.556 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:21:54.556 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:21:54.556 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:54.556 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:54.556 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:21:54.556 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:54.556 13:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:54.815 malloc3 00:21:54.815 13:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:55.074 [2024-07-26 13:21:35.418791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:55.074 [2024-07-26 13:21:35.418830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.074 [2024-07-26 13:21:35.418845] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe2d6b0 00:21:55.074 [2024-07-26 13:21:35.418857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.074 [2024-07-26 13:21:35.420207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:21:55.074 [2024-07-26 13:21:35.420233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:55.074 pt3 00:21:55.074 13:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:21:55.074 13:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:21:55.074 13:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:21:55.074 13:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:21:55.074 13:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:21:55.074 13:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:55.074 13:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:21:55.074 13:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:55.074 13:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:21:55.334 malloc4 00:21:55.334 13:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:21:55.594 [2024-07-26 13:21:35.880384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:21:55.594 [2024-07-26 13:21:35.880427] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.594 [2024-07-26 13:21:35.880442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe2b370 00:21:55.594 [2024-07-26 13:21:35.880454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:21:55.594 [2024-07-26 13:21:35.881807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.594 [2024-07-26 13:21:35.881836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:21:55.594 pt4 00:21:55.594 13:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:21:55.594 13:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:21:55.594 13:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:21:55.594 [2024-07-26 13:21:36.105000] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:55.594 [2024-07-26 13:21:36.106172] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:55.594 [2024-07-26 13:21:36.106225] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:55.594 [2024-07-26 13:21:36.106268] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:21:55.594 [2024-07-26 13:21:36.106415] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xc8c560 00:21:55.594 [2024-07-26 13:21:36.106427] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:55.594 [2024-07-26 13:21:36.106608] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc90650 00:21:55.594 [2024-07-26 13:21:36.106740] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc8c560 00:21:55.594 [2024-07-26 13:21:36.106749] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xc8c560 00:21:55.594 [2024-07-26 13:21:36.106853] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:55.853 13:21:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:55.853 13:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:55.853 13:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:55.853 13:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:55.853 13:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:55.853 13:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:55.853 13:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:55.853 13:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:55.853 13:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:55.854 13:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:55.854 13:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.854 13:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.854 13:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:55.854 "name": "raid_bdev1", 00:21:55.854 "uuid": "987c2e81-27c6-4377-a55f-d98f28d47d12", 00:21:55.854 "strip_size_kb": 0, 00:21:55.854 "state": "online", 00:21:55.854 "raid_level": "raid1", 00:21:55.854 "superblock": true, 00:21:55.854 "num_base_bdevs": 4, 00:21:55.854 "num_base_bdevs_discovered": 4, 00:21:55.854 "num_base_bdevs_operational": 4, 00:21:55.854 "base_bdevs_list": [ 00:21:55.854 { 00:21:55.854 "name": "pt1", 00:21:55.854 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:55.854 "is_configured": true, 
00:21:55.854 "data_offset": 2048, 00:21:55.854 "data_size": 63488 00:21:55.854 }, 00:21:55.854 { 00:21:55.854 "name": "pt2", 00:21:55.854 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:55.854 "is_configured": true, 00:21:55.854 "data_offset": 2048, 00:21:55.854 "data_size": 63488 00:21:55.854 }, 00:21:55.854 { 00:21:55.854 "name": "pt3", 00:21:55.854 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:55.854 "is_configured": true, 00:21:55.854 "data_offset": 2048, 00:21:55.854 "data_size": 63488 00:21:55.854 }, 00:21:55.854 { 00:21:55.854 "name": "pt4", 00:21:55.854 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:55.854 "is_configured": true, 00:21:55.854 "data_offset": 2048, 00:21:55.854 "data_size": 63488 00:21:55.854 } 00:21:55.854 ] 00:21:55.854 }' 00:21:55.854 13:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:55.854 13:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.422 13:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:21:56.422 13:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:56.422 13:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:56.422 13:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:56.422 13:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:56.422 13:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:56.422 13:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:56.422 13:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:56.681 [2024-07-26 13:21:37.136112] bdev_raid.c:1120:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:21:56.681 13:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:56.681 "name": "raid_bdev1", 00:21:56.681 "aliases": [ 00:21:56.681 "987c2e81-27c6-4377-a55f-d98f28d47d12" 00:21:56.681 ], 00:21:56.681 "product_name": "Raid Volume", 00:21:56.681 "block_size": 512, 00:21:56.681 "num_blocks": 63488, 00:21:56.681 "uuid": "987c2e81-27c6-4377-a55f-d98f28d47d12", 00:21:56.681 "assigned_rate_limits": { 00:21:56.681 "rw_ios_per_sec": 0, 00:21:56.681 "rw_mbytes_per_sec": 0, 00:21:56.681 "r_mbytes_per_sec": 0, 00:21:56.681 "w_mbytes_per_sec": 0 00:21:56.681 }, 00:21:56.681 "claimed": false, 00:21:56.681 "zoned": false, 00:21:56.681 "supported_io_types": { 00:21:56.681 "read": true, 00:21:56.681 "write": true, 00:21:56.681 "unmap": false, 00:21:56.681 "flush": false, 00:21:56.681 "reset": true, 00:21:56.681 "nvme_admin": false, 00:21:56.681 "nvme_io": false, 00:21:56.681 "nvme_io_md": false, 00:21:56.681 "write_zeroes": true, 00:21:56.681 "zcopy": false, 00:21:56.681 "get_zone_info": false, 00:21:56.681 "zone_management": false, 00:21:56.681 "zone_append": false, 00:21:56.681 "compare": false, 00:21:56.681 "compare_and_write": false, 00:21:56.681 "abort": false, 00:21:56.681 "seek_hole": false, 00:21:56.681 "seek_data": false, 00:21:56.681 "copy": false, 00:21:56.681 "nvme_iov_md": false 00:21:56.681 }, 00:21:56.681 "memory_domains": [ 00:21:56.681 { 00:21:56.681 "dma_device_id": "system", 00:21:56.681 "dma_device_type": 1 00:21:56.681 }, 00:21:56.681 { 00:21:56.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.681 "dma_device_type": 2 00:21:56.681 }, 00:21:56.681 { 00:21:56.681 "dma_device_id": "system", 00:21:56.681 "dma_device_type": 1 00:21:56.681 }, 00:21:56.681 { 00:21:56.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.681 "dma_device_type": 2 00:21:56.681 }, 00:21:56.681 { 00:21:56.681 "dma_device_id": "system", 00:21:56.681 "dma_device_type": 1 00:21:56.681 }, 00:21:56.681 { 
00:21:56.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.681 "dma_device_type": 2 00:21:56.681 }, 00:21:56.681 { 00:21:56.681 "dma_device_id": "system", 00:21:56.681 "dma_device_type": 1 00:21:56.681 }, 00:21:56.681 { 00:21:56.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.681 "dma_device_type": 2 00:21:56.681 } 00:21:56.681 ], 00:21:56.681 "driver_specific": { 00:21:56.681 "raid": { 00:21:56.681 "uuid": "987c2e81-27c6-4377-a55f-d98f28d47d12", 00:21:56.681 "strip_size_kb": 0, 00:21:56.681 "state": "online", 00:21:56.681 "raid_level": "raid1", 00:21:56.681 "superblock": true, 00:21:56.681 "num_base_bdevs": 4, 00:21:56.681 "num_base_bdevs_discovered": 4, 00:21:56.681 "num_base_bdevs_operational": 4, 00:21:56.681 "base_bdevs_list": [ 00:21:56.681 { 00:21:56.681 "name": "pt1", 00:21:56.681 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:56.681 "is_configured": true, 00:21:56.681 "data_offset": 2048, 00:21:56.681 "data_size": 63488 00:21:56.681 }, 00:21:56.681 { 00:21:56.681 "name": "pt2", 00:21:56.682 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:56.682 "is_configured": true, 00:21:56.682 "data_offset": 2048, 00:21:56.682 "data_size": 63488 00:21:56.682 }, 00:21:56.682 { 00:21:56.682 "name": "pt3", 00:21:56.682 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:56.682 "is_configured": true, 00:21:56.682 "data_offset": 2048, 00:21:56.682 "data_size": 63488 00:21:56.682 }, 00:21:56.682 { 00:21:56.682 "name": "pt4", 00:21:56.682 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:56.682 "is_configured": true, 00:21:56.682 "data_offset": 2048, 00:21:56.682 "data_size": 63488 00:21:56.682 } 00:21:56.682 ] 00:21:56.682 } 00:21:56.682 } 00:21:56.682 }' 00:21:56.682 13:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:56.682 13:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:56.682 pt2 
00:21:56.682 pt3 00:21:56.682 pt4' 00:21:56.682 13:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:56.682 13:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:56.941 13:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:56.941 13:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:56.941 "name": "pt1", 00:21:56.941 "aliases": [ 00:21:56.941 "00000000-0000-0000-0000-000000000001" 00:21:56.941 ], 00:21:56.941 "product_name": "passthru", 00:21:56.941 "block_size": 512, 00:21:56.941 "num_blocks": 65536, 00:21:56.941 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:56.941 "assigned_rate_limits": { 00:21:56.941 "rw_ios_per_sec": 0, 00:21:56.941 "rw_mbytes_per_sec": 0, 00:21:56.941 "r_mbytes_per_sec": 0, 00:21:56.941 "w_mbytes_per_sec": 0 00:21:56.941 }, 00:21:56.941 "claimed": true, 00:21:56.941 "claim_type": "exclusive_write", 00:21:56.941 "zoned": false, 00:21:56.941 "supported_io_types": { 00:21:56.941 "read": true, 00:21:56.941 "write": true, 00:21:56.941 "unmap": true, 00:21:56.941 "flush": true, 00:21:56.941 "reset": true, 00:21:56.941 "nvme_admin": false, 00:21:56.941 "nvme_io": false, 00:21:56.941 "nvme_io_md": false, 00:21:56.941 "write_zeroes": true, 00:21:56.941 "zcopy": true, 00:21:56.941 "get_zone_info": false, 00:21:56.941 "zone_management": false, 00:21:56.941 "zone_append": false, 00:21:56.941 "compare": false, 00:21:56.941 "compare_and_write": false, 00:21:56.941 "abort": true, 00:21:56.941 "seek_hole": false, 00:21:56.941 "seek_data": false, 00:21:56.941 "copy": true, 00:21:56.941 "nvme_iov_md": false 00:21:56.941 }, 00:21:56.941 "memory_domains": [ 00:21:56.941 { 00:21:56.941 "dma_device_id": "system", 00:21:56.941 "dma_device_type": 1 00:21:56.941 }, 00:21:56.941 { 00:21:56.941 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.941 "dma_device_type": 2 00:21:56.941 } 00:21:56.941 ], 00:21:56.941 "driver_specific": { 00:21:56.941 "passthru": { 00:21:56.941 "name": "pt1", 00:21:56.941 "base_bdev_name": "malloc1" 00:21:56.941 } 00:21:56.941 } 00:21:56.941 }' 00:21:56.941 13:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:57.200 13:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:57.200 13:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:57.200 13:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:57.200 13:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:57.201 13:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:57.201 13:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:57.201 13:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:57.201 13:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:57.201 13:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:57.460 13:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:57.460 13:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:57.460 13:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:57.460 13:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:57.460 13:21:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:57.719 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:57.719 "name": "pt2", 
00:21:57.719 "aliases": [ 00:21:57.719 "00000000-0000-0000-0000-000000000002" 00:21:57.719 ], 00:21:57.719 "product_name": "passthru", 00:21:57.719 "block_size": 512, 00:21:57.719 "num_blocks": 65536, 00:21:57.719 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:57.719 "assigned_rate_limits": { 00:21:57.719 "rw_ios_per_sec": 0, 00:21:57.719 "rw_mbytes_per_sec": 0, 00:21:57.719 "r_mbytes_per_sec": 0, 00:21:57.719 "w_mbytes_per_sec": 0 00:21:57.719 }, 00:21:57.719 "claimed": true, 00:21:57.719 "claim_type": "exclusive_write", 00:21:57.719 "zoned": false, 00:21:57.719 "supported_io_types": { 00:21:57.719 "read": true, 00:21:57.719 "write": true, 00:21:57.719 "unmap": true, 00:21:57.719 "flush": true, 00:21:57.719 "reset": true, 00:21:57.719 "nvme_admin": false, 00:21:57.719 "nvme_io": false, 00:21:57.719 "nvme_io_md": false, 00:21:57.719 "write_zeroes": true, 00:21:57.719 "zcopy": true, 00:21:57.719 "get_zone_info": false, 00:21:57.719 "zone_management": false, 00:21:57.719 "zone_append": false, 00:21:57.719 "compare": false, 00:21:57.719 "compare_and_write": false, 00:21:57.719 "abort": true, 00:21:57.719 "seek_hole": false, 00:21:57.719 "seek_data": false, 00:21:57.719 "copy": true, 00:21:57.719 "nvme_iov_md": false 00:21:57.719 }, 00:21:57.719 "memory_domains": [ 00:21:57.719 { 00:21:57.719 "dma_device_id": "system", 00:21:57.719 "dma_device_type": 1 00:21:57.719 }, 00:21:57.719 { 00:21:57.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:57.719 "dma_device_type": 2 00:21:57.719 } 00:21:57.719 ], 00:21:57.719 "driver_specific": { 00:21:57.719 "passthru": { 00:21:57.719 "name": "pt2", 00:21:57.719 "base_bdev_name": "malloc2" 00:21:57.719 } 00:21:57.719 } 00:21:57.719 }' 00:21:57.719 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:57.719 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:57.719 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 
]] 00:21:57.719 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:57.719 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:57.719 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:57.719 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:57.719 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:57.719 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:57.719 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:57.979 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:57.979 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:57.979 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:57.979 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:57.979 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:58.238 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:58.238 "name": "pt3", 00:21:58.238 "aliases": [ 00:21:58.238 "00000000-0000-0000-0000-000000000003" 00:21:58.238 ], 00:21:58.238 "product_name": "passthru", 00:21:58.238 "block_size": 512, 00:21:58.238 "num_blocks": 65536, 00:21:58.238 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:58.238 "assigned_rate_limits": { 00:21:58.238 "rw_ios_per_sec": 0, 00:21:58.238 "rw_mbytes_per_sec": 0, 00:21:58.238 "r_mbytes_per_sec": 0, 00:21:58.238 "w_mbytes_per_sec": 0 00:21:58.238 }, 00:21:58.238 "claimed": true, 00:21:58.238 "claim_type": "exclusive_write", 00:21:58.238 "zoned": false, 00:21:58.238 
"supported_io_types": { 00:21:58.238 "read": true, 00:21:58.238 "write": true, 00:21:58.238 "unmap": true, 00:21:58.238 "flush": true, 00:21:58.238 "reset": true, 00:21:58.238 "nvme_admin": false, 00:21:58.238 "nvme_io": false, 00:21:58.238 "nvme_io_md": false, 00:21:58.238 "write_zeroes": true, 00:21:58.238 "zcopy": true, 00:21:58.238 "get_zone_info": false, 00:21:58.238 "zone_management": false, 00:21:58.238 "zone_append": false, 00:21:58.238 "compare": false, 00:21:58.238 "compare_and_write": false, 00:21:58.238 "abort": true, 00:21:58.238 "seek_hole": false, 00:21:58.238 "seek_data": false, 00:21:58.238 "copy": true, 00:21:58.238 "nvme_iov_md": false 00:21:58.238 }, 00:21:58.238 "memory_domains": [ 00:21:58.238 { 00:21:58.238 "dma_device_id": "system", 00:21:58.238 "dma_device_type": 1 00:21:58.238 }, 00:21:58.238 { 00:21:58.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.238 "dma_device_type": 2 00:21:58.238 } 00:21:58.238 ], 00:21:58.238 "driver_specific": { 00:21:58.238 "passthru": { 00:21:58.238 "name": "pt3", 00:21:58.238 "base_bdev_name": "malloc3" 00:21:58.238 } 00:21:58.238 } 00:21:58.238 }' 00:21:58.238 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:58.238 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:58.238 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:58.238 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:58.238 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:58.238 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:58.238 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:58.239 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:58.498 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- 
# [[ null == null ]] 00:21:58.498 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:58.498 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:58.498 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:58.498 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:58.498 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:21:58.498 13:21:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:58.757 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:58.757 "name": "pt4", 00:21:58.757 "aliases": [ 00:21:58.757 "00000000-0000-0000-0000-000000000004" 00:21:58.757 ], 00:21:58.757 "product_name": "passthru", 00:21:58.757 "block_size": 512, 00:21:58.757 "num_blocks": 65536, 00:21:58.757 "uuid": "00000000-0000-0000-0000-000000000004", 00:21:58.757 "assigned_rate_limits": { 00:21:58.757 "rw_ios_per_sec": 0, 00:21:58.757 "rw_mbytes_per_sec": 0, 00:21:58.757 "r_mbytes_per_sec": 0, 00:21:58.757 "w_mbytes_per_sec": 0 00:21:58.757 }, 00:21:58.757 "claimed": true, 00:21:58.757 "claim_type": "exclusive_write", 00:21:58.757 "zoned": false, 00:21:58.757 "supported_io_types": { 00:21:58.757 "read": true, 00:21:58.757 "write": true, 00:21:58.757 "unmap": true, 00:21:58.757 "flush": true, 00:21:58.757 "reset": true, 00:21:58.757 "nvme_admin": false, 00:21:58.757 "nvme_io": false, 00:21:58.757 "nvme_io_md": false, 00:21:58.757 "write_zeroes": true, 00:21:58.757 "zcopy": true, 00:21:58.757 "get_zone_info": false, 00:21:58.757 "zone_management": false, 00:21:58.757 "zone_append": false, 00:21:58.757 "compare": false, 00:21:58.757 "compare_and_write": false, 00:21:58.757 "abort": true, 00:21:58.757 "seek_hole": false, 
00:21:58.757 "seek_data": false, 00:21:58.757 "copy": true, 00:21:58.757 "nvme_iov_md": false 00:21:58.757 }, 00:21:58.757 "memory_domains": [ 00:21:58.757 { 00:21:58.757 "dma_device_id": "system", 00:21:58.757 "dma_device_type": 1 00:21:58.757 }, 00:21:58.757 { 00:21:58.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.758 "dma_device_type": 2 00:21:58.758 } 00:21:58.758 ], 00:21:58.758 "driver_specific": { 00:21:58.758 "passthru": { 00:21:58.758 "name": "pt4", 00:21:58.758 "base_bdev_name": "malloc4" 00:21:58.758 } 00:21:58.758 } 00:21:58.758 }' 00:21:58.758 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:58.758 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:58.758 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:58.758 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:58.758 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:59.017 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:59.017 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:59.017 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:59.017 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:59.017 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:59.017 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:59.017 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:59.017 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:59.017 13:21:39 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:21:59.276 [2024-07-26 13:21:39.674803] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:59.276 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=987c2e81-27c6-4377-a55f-d98f28d47d12 00:21:59.276 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 987c2e81-27c6-4377-a55f-d98f28d47d12 ']' 00:21:59.276 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:59.535 [2024-07-26 13:21:39.903128] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:59.535 [2024-07-26 13:21:39.903150] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:59.535 [2024-07-26 13:21:39.903191] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:59.535 [2024-07-26 13:21:39.903263] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:59.535 [2024-07-26 13:21:39.903274] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc8c560 name raid_bdev1, state offline 00:21:59.535 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.535 13:21:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:21:59.794 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:21:59.794 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:21:59.794 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:21:59.794 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:00.052 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:22:00.052 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:00.310 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:22:00.310 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:00.310 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:22:00.310 13:21:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:00.569 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:00.569 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:00.828 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:22:00.829 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:00.829 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:22:00.829 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:00.829 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:00.829 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:00.829 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:00.829 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:00.829 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:00.829 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:00.829 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:22:00.829 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:22:00.829 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:01.088 [2024-07-26 13:21:41.495423] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:01.088 [2024-07-26 13:21:41.496733] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:01.088 [2024-07-26 13:21:41.496772] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:01.088 [2024-07-26 
13:21:41.496804] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:22:01.088 [2024-07-26 13:21:41.496848] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:01.088 [2024-07-26 13:21:41.496885] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:01.088 [2024-07-26 13:21:41.496907] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:22:01.088 [2024-07-26 13:21:41.496927] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:22:01.088 [2024-07-26 13:21:41.496943] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:01.088 [2024-07-26 13:21:41.496953] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc8c530 name raid_bdev1, state configuring 00:22:01.088 request: 00:22:01.088 { 00:22:01.088 "name": "raid_bdev1", 00:22:01.088 "raid_level": "raid1", 00:22:01.088 "base_bdevs": [ 00:22:01.088 "malloc1", 00:22:01.088 "malloc2", 00:22:01.088 "malloc3", 00:22:01.088 "malloc4" 00:22:01.088 ], 00:22:01.088 "superblock": false, 00:22:01.088 "method": "bdev_raid_create", 00:22:01.088 "req_id": 1 00:22:01.088 } 00:22:01.088 Got JSON-RPC error response 00:22:01.088 response: 00:22:01.088 { 00:22:01.088 "code": -17, 00:22:01.088 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:01.088 } 00:22:01.088 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:22:01.088 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:01.088 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:01.088 13:21:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:22:01.088 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.088 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:22:01.347 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:22:01.347 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:22:01.347 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:01.606 [2024-07-26 13:21:41.952573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:01.606 [2024-07-26 13:21:41.952615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:01.606 [2024-07-26 13:21:41.952631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe36d50 00:22:01.606 [2024-07-26 13:21:41.952642] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:01.606 [2024-07-26 13:21:41.954113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:01.606 [2024-07-26 13:21:41.954150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:01.606 [2024-07-26 13:21:41.954215] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:01.606 [2024-07-26 13:21:41.954241] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:01.606 pt1 00:22:01.606 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:22:01.606 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:01.606 
13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:01.606 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:01.606 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:01.606 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:01.606 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:01.606 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:01.606 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:01.606 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:01.606 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.606 13:21:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.865 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:01.865 "name": "raid_bdev1", 00:22:01.865 "uuid": "987c2e81-27c6-4377-a55f-d98f28d47d12", 00:22:01.865 "strip_size_kb": 0, 00:22:01.865 "state": "configuring", 00:22:01.865 "raid_level": "raid1", 00:22:01.865 "superblock": true, 00:22:01.865 "num_base_bdevs": 4, 00:22:01.865 "num_base_bdevs_discovered": 1, 00:22:01.865 "num_base_bdevs_operational": 4, 00:22:01.865 "base_bdevs_list": [ 00:22:01.865 { 00:22:01.865 "name": "pt1", 00:22:01.865 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:01.865 "is_configured": true, 00:22:01.865 "data_offset": 2048, 00:22:01.865 "data_size": 63488 00:22:01.865 }, 00:22:01.865 { 00:22:01.865 "name": null, 00:22:01.865 "uuid": "00000000-0000-0000-0000-000000000002", 
00:22:01.865 "is_configured": false, 00:22:01.865 "data_offset": 2048, 00:22:01.865 "data_size": 63488 00:22:01.865 }, 00:22:01.865 { 00:22:01.865 "name": null, 00:22:01.865 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:01.865 "is_configured": false, 00:22:01.865 "data_offset": 2048, 00:22:01.865 "data_size": 63488 00:22:01.865 }, 00:22:01.865 { 00:22:01.865 "name": null, 00:22:01.865 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:01.865 "is_configured": false, 00:22:01.865 "data_offset": 2048, 00:22:01.865 "data_size": 63488 00:22:01.865 } 00:22:01.865 ] 00:22:01.865 }' 00:22:01.865 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:01.865 13:21:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.486 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:22:02.486 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:02.486 [2024-07-26 13:21:42.899099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:02.486 [2024-07-26 13:21:42.899156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.486 [2024-07-26 13:21:42.899174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc8b0e0 00:22:02.486 [2024-07-26 13:21:42.899185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.486 [2024-07-26 13:21:42.899490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.486 [2024-07-26 13:21:42.899506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:02.486 [2024-07-26 13:21:42.899559] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:02.486 
[2024-07-26 13:21:42.899577] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:02.486 pt2 00:22:02.486 13:21:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:02.746 [2024-07-26 13:21:43.071557] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:02.746 13:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:22:02.746 13:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:02.746 13:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:02.746 13:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:02.746 13:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:02.746 13:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:02.746 13:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:02.746 13:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:02.746 13:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:02.746 13:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:02.746 13:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.746 13:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.746 13:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:02.746 "name": "raid_bdev1", 
00:22:02.746 "uuid": "987c2e81-27c6-4377-a55f-d98f28d47d12", 00:22:02.746 "strip_size_kb": 0, 00:22:02.746 "state": "configuring", 00:22:02.746 "raid_level": "raid1", 00:22:02.746 "superblock": true, 00:22:02.746 "num_base_bdevs": 4, 00:22:02.746 "num_base_bdevs_discovered": 1, 00:22:02.746 "num_base_bdevs_operational": 4, 00:22:02.746 "base_bdevs_list": [ 00:22:02.746 { 00:22:02.746 "name": "pt1", 00:22:02.746 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:02.746 "is_configured": true, 00:22:02.746 "data_offset": 2048, 00:22:02.746 "data_size": 63488 00:22:02.746 }, 00:22:02.746 { 00:22:02.746 "name": null, 00:22:02.746 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:02.746 "is_configured": false, 00:22:02.746 "data_offset": 2048, 00:22:02.746 "data_size": 63488 00:22:02.746 }, 00:22:02.746 { 00:22:02.746 "name": null, 00:22:02.746 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:02.746 "is_configured": false, 00:22:02.746 "data_offset": 2048, 00:22:02.746 "data_size": 63488 00:22:02.746 }, 00:22:02.746 { 00:22:02.746 "name": null, 00:22:02.746 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:02.746 "is_configured": false, 00:22:02.746 "data_offset": 2048, 00:22:02.746 "data_size": 63488 00:22:02.746 } 00:22:02.746 ] 00:22:02.746 }' 00:22:02.746 13:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:02.746 13:21:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.313 13:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:22:03.313 13:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:22:03.313 13:21:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:03.572 [2024-07-26 13:21:44.026058] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:03.572 [2024-07-26 13:21:44.026102] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:03.572 [2024-07-26 13:21:44.026118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc8b310 00:22:03.572 [2024-07-26 13:21:44.026129] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:03.572 [2024-07-26 13:21:44.026446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:03.572 [2024-07-26 13:21:44.026463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:03.572 [2024-07-26 13:21:44.026519] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:03.572 [2024-07-26 13:21:44.026537] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:03.572 pt2 00:22:03.572 13:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:22:03.572 13:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:22:03.572 13:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:03.832 [2024-07-26 13:21:44.238617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:03.832 [2024-07-26 13:21:44.238644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:03.832 [2024-07-26 13:21:44.238659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc8d880 00:22:03.832 [2024-07-26 13:21:44.238670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:03.832 [2024-07-26 13:21:44.238932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:03.832 [2024-07-26 
13:21:44.238947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:03.832 [2024-07-26 13:21:44.238992] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:03.832 [2024-07-26 13:21:44.239008] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:03.832 pt3 00:22:03.832 13:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:22:03.832 13:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:22:03.832 13:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:04.091 [2024-07-26 13:21:44.455194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:04.091 [2024-07-26 13:21:44.455222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:04.091 [2024-07-26 13:21:44.455235] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc93520 00:22:04.091 [2024-07-26 13:21:44.455245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:04.091 [2024-07-26 13:21:44.455501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:04.091 [2024-07-26 13:21:44.455516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:04.091 [2024-07-26 13:21:44.455560] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:04.091 [2024-07-26 13:21:44.455576] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:04.091 [2024-07-26 13:21:44.455687] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xc8cc20 00:22:04.091 [2024-07-26 13:21:44.455697] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 63488, blocklen 512 00:22:04.091 [2024-07-26 13:21:44.455843] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe459c0 00:22:04.091 [2024-07-26 13:21:44.455966] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc8cc20 00:22:04.091 [2024-07-26 13:21:44.455975] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xc8cc20 00:22:04.091 [2024-07-26 13:21:44.456061] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:04.091 pt4 00:22:04.091 13:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:22:04.091 13:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:22:04.091 13:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:04.091 13:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:04.091 13:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:04.091 13:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:04.091 13:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:04.091 13:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:04.091 13:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:04.091 13:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:04.092 13:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:04.092 13:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:04.092 13:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.092 13:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:04.351 13:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:04.351 "name": "raid_bdev1", 00:22:04.351 "uuid": "987c2e81-27c6-4377-a55f-d98f28d47d12", 00:22:04.351 "strip_size_kb": 0, 00:22:04.351 "state": "online", 00:22:04.351 "raid_level": "raid1", 00:22:04.351 "superblock": true, 00:22:04.351 "num_base_bdevs": 4, 00:22:04.351 "num_base_bdevs_discovered": 4, 00:22:04.351 "num_base_bdevs_operational": 4, 00:22:04.351 "base_bdevs_list": [ 00:22:04.351 { 00:22:04.351 "name": "pt1", 00:22:04.351 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:04.351 "is_configured": true, 00:22:04.351 "data_offset": 2048, 00:22:04.351 "data_size": 63488 00:22:04.351 }, 00:22:04.351 { 00:22:04.351 "name": "pt2", 00:22:04.351 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:04.351 "is_configured": true, 00:22:04.351 "data_offset": 2048, 00:22:04.351 "data_size": 63488 00:22:04.351 }, 00:22:04.351 { 00:22:04.351 "name": "pt3", 00:22:04.351 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:04.351 "is_configured": true, 00:22:04.351 "data_offset": 2048, 00:22:04.351 "data_size": 63488 00:22:04.351 }, 00:22:04.351 { 00:22:04.351 "name": "pt4", 00:22:04.351 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:04.351 "is_configured": true, 00:22:04.351 "data_offset": 2048, 00:22:04.351 "data_size": 63488 00:22:04.351 } 00:22:04.351 ] 00:22:04.351 }' 00:22:04.351 13:21:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:04.351 13:21:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:04.918 13:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:22:04.918 13:21:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:04.918 13:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:04.918 13:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:04.918 13:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:04.918 13:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:04.918 13:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:04.918 13:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:05.177 [2024-07-26 13:21:45.482196] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:05.177 13:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:05.177 "name": "raid_bdev1", 00:22:05.177 "aliases": [ 00:22:05.177 "987c2e81-27c6-4377-a55f-d98f28d47d12" 00:22:05.177 ], 00:22:05.177 "product_name": "Raid Volume", 00:22:05.177 "block_size": 512, 00:22:05.177 "num_blocks": 63488, 00:22:05.177 "uuid": "987c2e81-27c6-4377-a55f-d98f28d47d12", 00:22:05.177 "assigned_rate_limits": { 00:22:05.177 "rw_ios_per_sec": 0, 00:22:05.177 "rw_mbytes_per_sec": 0, 00:22:05.177 "r_mbytes_per_sec": 0, 00:22:05.177 "w_mbytes_per_sec": 0 00:22:05.177 }, 00:22:05.177 "claimed": false, 00:22:05.178 "zoned": false, 00:22:05.178 "supported_io_types": { 00:22:05.178 "read": true, 00:22:05.178 "write": true, 00:22:05.178 "unmap": false, 00:22:05.178 "flush": false, 00:22:05.178 "reset": true, 00:22:05.178 "nvme_admin": false, 00:22:05.178 "nvme_io": false, 00:22:05.178 "nvme_io_md": false, 00:22:05.178 "write_zeroes": true, 00:22:05.178 "zcopy": false, 00:22:05.178 "get_zone_info": false, 00:22:05.178 "zone_management": false, 
00:22:05.178 "zone_append": false, 00:22:05.178 "compare": false, 00:22:05.178 "compare_and_write": false, 00:22:05.178 "abort": false, 00:22:05.178 "seek_hole": false, 00:22:05.178 "seek_data": false, 00:22:05.178 "copy": false, 00:22:05.178 "nvme_iov_md": false 00:22:05.178 }, 00:22:05.178 "memory_domains": [ 00:22:05.178 { 00:22:05.178 "dma_device_id": "system", 00:22:05.178 "dma_device_type": 1 00:22:05.178 }, 00:22:05.178 { 00:22:05.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.178 "dma_device_type": 2 00:22:05.178 }, 00:22:05.178 { 00:22:05.178 "dma_device_id": "system", 00:22:05.178 "dma_device_type": 1 00:22:05.178 }, 00:22:05.178 { 00:22:05.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.178 "dma_device_type": 2 00:22:05.178 }, 00:22:05.178 { 00:22:05.178 "dma_device_id": "system", 00:22:05.178 "dma_device_type": 1 00:22:05.178 }, 00:22:05.178 { 00:22:05.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.178 "dma_device_type": 2 00:22:05.178 }, 00:22:05.178 { 00:22:05.178 "dma_device_id": "system", 00:22:05.178 "dma_device_type": 1 00:22:05.178 }, 00:22:05.178 { 00:22:05.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.178 "dma_device_type": 2 00:22:05.178 } 00:22:05.178 ], 00:22:05.178 "driver_specific": { 00:22:05.178 "raid": { 00:22:05.178 "uuid": "987c2e81-27c6-4377-a55f-d98f28d47d12", 00:22:05.178 "strip_size_kb": 0, 00:22:05.178 "state": "online", 00:22:05.178 "raid_level": "raid1", 00:22:05.178 "superblock": true, 00:22:05.178 "num_base_bdevs": 4, 00:22:05.178 "num_base_bdevs_discovered": 4, 00:22:05.178 "num_base_bdevs_operational": 4, 00:22:05.178 "base_bdevs_list": [ 00:22:05.178 { 00:22:05.178 "name": "pt1", 00:22:05.178 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:05.178 "is_configured": true, 00:22:05.178 "data_offset": 2048, 00:22:05.178 "data_size": 63488 00:22:05.178 }, 00:22:05.178 { 00:22:05.178 "name": "pt2", 00:22:05.178 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:05.178 "is_configured": 
true, 00:22:05.178 "data_offset": 2048, 00:22:05.178 "data_size": 63488 00:22:05.178 }, 00:22:05.178 { 00:22:05.178 "name": "pt3", 00:22:05.178 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:05.178 "is_configured": true, 00:22:05.178 "data_offset": 2048, 00:22:05.178 "data_size": 63488 00:22:05.178 }, 00:22:05.178 { 00:22:05.178 "name": "pt4", 00:22:05.178 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:05.178 "is_configured": true, 00:22:05.178 "data_offset": 2048, 00:22:05.178 "data_size": 63488 00:22:05.178 } 00:22:05.178 ] 00:22:05.178 } 00:22:05.178 } 00:22:05.178 }' 00:22:05.178 13:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:05.178 13:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:05.178 pt2 00:22:05.178 pt3 00:22:05.178 pt4' 00:22:05.178 13:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:05.178 13:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:05.178 13:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:05.437 13:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:05.437 "name": "pt1", 00:22:05.437 "aliases": [ 00:22:05.437 "00000000-0000-0000-0000-000000000001" 00:22:05.437 ], 00:22:05.437 "product_name": "passthru", 00:22:05.437 "block_size": 512, 00:22:05.437 "num_blocks": 65536, 00:22:05.437 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:05.437 "assigned_rate_limits": { 00:22:05.437 "rw_ios_per_sec": 0, 00:22:05.437 "rw_mbytes_per_sec": 0, 00:22:05.437 "r_mbytes_per_sec": 0, 00:22:05.437 "w_mbytes_per_sec": 0 00:22:05.437 }, 00:22:05.437 "claimed": true, 00:22:05.437 "claim_type": "exclusive_write", 00:22:05.437 
"zoned": false, 00:22:05.437 "supported_io_types": { 00:22:05.437 "read": true, 00:22:05.437 "write": true, 00:22:05.437 "unmap": true, 00:22:05.437 "flush": true, 00:22:05.437 "reset": true, 00:22:05.437 "nvme_admin": false, 00:22:05.437 "nvme_io": false, 00:22:05.437 "nvme_io_md": false, 00:22:05.437 "write_zeroes": true, 00:22:05.437 "zcopy": true, 00:22:05.437 "get_zone_info": false, 00:22:05.437 "zone_management": false, 00:22:05.437 "zone_append": false, 00:22:05.437 "compare": false, 00:22:05.437 "compare_and_write": false, 00:22:05.437 "abort": true, 00:22:05.437 "seek_hole": false, 00:22:05.437 "seek_data": false, 00:22:05.437 "copy": true, 00:22:05.437 "nvme_iov_md": false 00:22:05.437 }, 00:22:05.437 "memory_domains": [ 00:22:05.437 { 00:22:05.437 "dma_device_id": "system", 00:22:05.437 "dma_device_type": 1 00:22:05.437 }, 00:22:05.437 { 00:22:05.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.437 "dma_device_type": 2 00:22:05.437 } 00:22:05.437 ], 00:22:05.437 "driver_specific": { 00:22:05.437 "passthru": { 00:22:05.437 "name": "pt1", 00:22:05.438 "base_bdev_name": "malloc1" 00:22:05.438 } 00:22:05.438 } 00:22:05.438 }' 00:22:05.438 13:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:05.438 13:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:05.438 13:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:05.438 13:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:05.438 13:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:05.438 13:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:05.438 13:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:05.697 13:21:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:05.697 13:21:46 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:05.697 13:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:05.697 13:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:05.697 13:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:05.697 13:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:05.697 13:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:05.697 13:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:05.957 13:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:05.957 "name": "pt2", 00:22:05.957 "aliases": [ 00:22:05.957 "00000000-0000-0000-0000-000000000002" 00:22:05.957 ], 00:22:05.957 "product_name": "passthru", 00:22:05.957 "block_size": 512, 00:22:05.957 "num_blocks": 65536, 00:22:05.957 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:05.957 "assigned_rate_limits": { 00:22:05.957 "rw_ios_per_sec": 0, 00:22:05.957 "rw_mbytes_per_sec": 0, 00:22:05.957 "r_mbytes_per_sec": 0, 00:22:05.957 "w_mbytes_per_sec": 0 00:22:05.957 }, 00:22:05.957 "claimed": true, 00:22:05.957 "claim_type": "exclusive_write", 00:22:05.957 "zoned": false, 00:22:05.957 "supported_io_types": { 00:22:05.957 "read": true, 00:22:05.957 "write": true, 00:22:05.957 "unmap": true, 00:22:05.957 "flush": true, 00:22:05.957 "reset": true, 00:22:05.957 "nvme_admin": false, 00:22:05.957 "nvme_io": false, 00:22:05.957 "nvme_io_md": false, 00:22:05.957 "write_zeroes": true, 00:22:05.957 "zcopy": true, 00:22:05.957 "get_zone_info": false, 00:22:05.957 "zone_management": false, 00:22:05.957 "zone_append": false, 00:22:05.957 "compare": false, 00:22:05.957 "compare_and_write": false, 00:22:05.957 "abort": true, 00:22:05.957 
"seek_hole": false, 00:22:05.957 "seek_data": false, 00:22:05.957 "copy": true, 00:22:05.957 "nvme_iov_md": false 00:22:05.957 }, 00:22:05.957 "memory_domains": [ 00:22:05.957 { 00:22:05.957 "dma_device_id": "system", 00:22:05.957 "dma_device_type": 1 00:22:05.957 }, 00:22:05.957 { 00:22:05.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.957 "dma_device_type": 2 00:22:05.957 } 00:22:05.957 ], 00:22:05.957 "driver_specific": { 00:22:05.957 "passthru": { 00:22:05.957 "name": "pt2", 00:22:05.957 "base_bdev_name": "malloc2" 00:22:05.957 } 00:22:05.957 } 00:22:05.957 }' 00:22:05.957 13:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:05.957 13:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:05.957 13:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:05.957 13:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:05.957 13:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:06.217 13:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:06.217 13:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:06.217 13:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:06.217 13:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:06.217 13:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:06.217 13:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:06.217 13:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:06.217 13:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:06.217 13:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:06.217 13:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:06.477 13:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:06.477 "name": "pt3", 00:22:06.477 "aliases": [ 00:22:06.477 "00000000-0000-0000-0000-000000000003" 00:22:06.477 ], 00:22:06.477 "product_name": "passthru", 00:22:06.477 "block_size": 512, 00:22:06.477 "num_blocks": 65536, 00:22:06.477 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:06.477 "assigned_rate_limits": { 00:22:06.477 "rw_ios_per_sec": 0, 00:22:06.477 "rw_mbytes_per_sec": 0, 00:22:06.477 "r_mbytes_per_sec": 0, 00:22:06.477 "w_mbytes_per_sec": 0 00:22:06.477 }, 00:22:06.477 "claimed": true, 00:22:06.477 "claim_type": "exclusive_write", 00:22:06.477 "zoned": false, 00:22:06.477 "supported_io_types": { 00:22:06.477 "read": true, 00:22:06.477 "write": true, 00:22:06.477 "unmap": true, 00:22:06.477 "flush": true, 00:22:06.477 "reset": true, 00:22:06.477 "nvme_admin": false, 00:22:06.477 "nvme_io": false, 00:22:06.477 "nvme_io_md": false, 00:22:06.477 "write_zeroes": true, 00:22:06.477 "zcopy": true, 00:22:06.477 "get_zone_info": false, 00:22:06.477 "zone_management": false, 00:22:06.477 "zone_append": false, 00:22:06.477 "compare": false, 00:22:06.477 "compare_and_write": false, 00:22:06.477 "abort": true, 00:22:06.477 "seek_hole": false, 00:22:06.477 "seek_data": false, 00:22:06.477 "copy": true, 00:22:06.477 "nvme_iov_md": false 00:22:06.477 }, 00:22:06.477 "memory_domains": [ 00:22:06.477 { 00:22:06.477 "dma_device_id": "system", 00:22:06.477 "dma_device_type": 1 00:22:06.477 }, 00:22:06.477 { 00:22:06.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:06.477 "dma_device_type": 2 00:22:06.477 } 00:22:06.477 ], 00:22:06.477 "driver_specific": { 00:22:06.477 "passthru": { 00:22:06.477 "name": "pt3", 00:22:06.477 "base_bdev_name": "malloc3" 
00:22:06.477 } 00:22:06.477 } 00:22:06.477 }' 00:22:06.477 13:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:06.477 13:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:06.477 13:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:06.477 13:21:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:06.737 13:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:06.737 13:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:06.737 13:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:06.737 13:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:06.737 13:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:06.737 13:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:06.737 13:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:06.737 13:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:06.737 13:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:06.737 13:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:22:06.737 13:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:06.997 13:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:06.997 "name": "pt4", 00:22:06.997 "aliases": [ 00:22:06.997 "00000000-0000-0000-0000-000000000004" 00:22:06.997 ], 00:22:06.997 "product_name": "passthru", 00:22:06.997 "block_size": 512, 00:22:06.997 "num_blocks": 65536, 00:22:06.997 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:22:06.997 "assigned_rate_limits": { 00:22:06.997 "rw_ios_per_sec": 0, 00:22:06.997 "rw_mbytes_per_sec": 0, 00:22:06.997 "r_mbytes_per_sec": 0, 00:22:06.997 "w_mbytes_per_sec": 0 00:22:06.997 }, 00:22:06.997 "claimed": true, 00:22:06.997 "claim_type": "exclusive_write", 00:22:06.997 "zoned": false, 00:22:06.997 "supported_io_types": { 00:22:06.997 "read": true, 00:22:06.997 "write": true, 00:22:06.997 "unmap": true, 00:22:06.997 "flush": true, 00:22:06.997 "reset": true, 00:22:06.997 "nvme_admin": false, 00:22:06.997 "nvme_io": false, 00:22:06.997 "nvme_io_md": false, 00:22:06.997 "write_zeroes": true, 00:22:06.997 "zcopy": true, 00:22:06.997 "get_zone_info": false, 00:22:06.997 "zone_management": false, 00:22:06.997 "zone_append": false, 00:22:06.997 "compare": false, 00:22:06.997 "compare_and_write": false, 00:22:06.997 "abort": true, 00:22:06.997 "seek_hole": false, 00:22:06.997 "seek_data": false, 00:22:06.997 "copy": true, 00:22:06.997 "nvme_iov_md": false 00:22:06.997 }, 00:22:06.997 "memory_domains": [ 00:22:06.997 { 00:22:06.997 "dma_device_id": "system", 00:22:06.997 "dma_device_type": 1 00:22:06.997 }, 00:22:06.997 { 00:22:06.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:06.997 "dma_device_type": 2 00:22:06.997 } 00:22:06.997 ], 00:22:06.997 "driver_specific": { 00:22:06.997 "passthru": { 00:22:06.997 "name": "pt4", 00:22:06.997 "base_bdev_name": "malloc4" 00:22:06.997 } 00:22:06.997 } 00:22:06.997 }' 00:22:06.997 13:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:06.997 13:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:07.257 13:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:07.257 13:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:07.257 13:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:07.257 13:21:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:07.257 13:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:07.257 13:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:07.257 13:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:07.257 13:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:07.257 13:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:07.516 13:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:07.516 13:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:07.516 13:21:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:22:07.516 [2024-07-26 13:21:47.996807] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:07.516 13:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 987c2e81-27c6-4377-a55f-d98f28d47d12 '!=' 987c2e81-27c6-4377-a55f-d98f28d47d12 ']' 00:22:07.516 13:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:22:07.517 13:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:07.517 13:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:22:07.517 13:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:07.776 [2024-07-26 13:21:48.225157] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:07.776 13:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:22:07.776 13:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:07.776 13:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:07.776 13:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:07.776 13:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:07.776 13:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:07.776 13:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:07.776 13:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:07.776 13:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:07.776 13:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:07.776 13:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.776 13:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.036 13:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:08.036 "name": "raid_bdev1", 00:22:08.036 "uuid": "987c2e81-27c6-4377-a55f-d98f28d47d12", 00:22:08.036 "strip_size_kb": 0, 00:22:08.036 "state": "online", 00:22:08.036 "raid_level": "raid1", 00:22:08.036 "superblock": true, 00:22:08.036 "num_base_bdevs": 4, 00:22:08.036 "num_base_bdevs_discovered": 3, 00:22:08.036 "num_base_bdevs_operational": 3, 00:22:08.036 "base_bdevs_list": [ 00:22:08.036 { 00:22:08.036 "name": null, 00:22:08.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.036 "is_configured": false, 00:22:08.036 "data_offset": 2048, 00:22:08.036 "data_size": 63488 
00:22:08.036 }, 00:22:08.036 { 00:22:08.036 "name": "pt2", 00:22:08.036 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:08.036 "is_configured": true, 00:22:08.036 "data_offset": 2048, 00:22:08.036 "data_size": 63488 00:22:08.036 }, 00:22:08.036 { 00:22:08.036 "name": "pt3", 00:22:08.036 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:08.036 "is_configured": true, 00:22:08.036 "data_offset": 2048, 00:22:08.036 "data_size": 63488 00:22:08.036 }, 00:22:08.036 { 00:22:08.036 "name": "pt4", 00:22:08.036 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:08.036 "is_configured": true, 00:22:08.036 "data_offset": 2048, 00:22:08.036 "data_size": 63488 00:22:08.036 } 00:22:08.036 ] 00:22:08.036 }' 00:22:08.036 13:21:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:08.036 13:21:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.604 13:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:08.863 [2024-07-26 13:21:49.195759] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:08.863 [2024-07-26 13:21:49.195784] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:08.863 [2024-07-26 13:21:49.195837] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:08.863 [2024-07-26 13:21:49.195902] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:08.863 [2024-07-26 13:21:49.195913] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc8cc20 name raid_bdev1, state offline 00:22:08.863 13:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.863 13:21:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:22:09.123 13:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:22:09.123 13:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:22:09.123 13:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:09.123 13:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:22:09.123 13:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:09.382 13:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:09.382 13:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:22:09.382 13:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:09.382 13:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:09.382 13:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:22:09.382 13:21:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:09.641 13:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:09.641 13:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:22:09.641 13:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:22:09.641 13:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:22:09.641 13:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:09.900 [2024-07-26 13:21:50.294585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:09.900 [2024-07-26 13:21:50.294628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:09.900 [2024-07-26 13:21:50.294643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc8b310 00:22:09.900 [2024-07-26 13:21:50.294655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:09.900 [2024-07-26 13:21:50.296135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:09.900 [2024-07-26 13:21:50.296169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:09.900 [2024-07-26 13:21:50.296228] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:09.900 [2024-07-26 13:21:50.296254] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:09.900 pt2 00:22:09.900 13:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:09.900 13:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:09.900 13:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:09.900 13:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:09.900 13:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:09.900 13:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:09.900 13:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:09.900 13:21:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:09.900 13:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:09.900 13:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:09.900 13:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.900 13:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.160 13:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:10.160 "name": "raid_bdev1", 00:22:10.160 "uuid": "987c2e81-27c6-4377-a55f-d98f28d47d12", 00:22:10.160 "strip_size_kb": 0, 00:22:10.160 "state": "configuring", 00:22:10.160 "raid_level": "raid1", 00:22:10.160 "superblock": true, 00:22:10.160 "num_base_bdevs": 4, 00:22:10.160 "num_base_bdevs_discovered": 1, 00:22:10.160 "num_base_bdevs_operational": 3, 00:22:10.160 "base_bdevs_list": [ 00:22:10.160 { 00:22:10.160 "name": null, 00:22:10.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.160 "is_configured": false, 00:22:10.160 "data_offset": 2048, 00:22:10.160 "data_size": 63488 00:22:10.160 }, 00:22:10.160 { 00:22:10.160 "name": "pt2", 00:22:10.160 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:10.160 "is_configured": true, 00:22:10.160 "data_offset": 2048, 00:22:10.160 "data_size": 63488 00:22:10.160 }, 00:22:10.160 { 00:22:10.160 "name": null, 00:22:10.160 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:10.160 "is_configured": false, 00:22:10.160 "data_offset": 2048, 00:22:10.160 "data_size": 63488 00:22:10.160 }, 00:22:10.160 { 00:22:10.160 "name": null, 00:22:10.160 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:10.160 "is_configured": false, 00:22:10.160 "data_offset": 2048, 00:22:10.160 "data_size": 63488 00:22:10.160 } 
00:22:10.160 ] 00:22:10.160 }' 00:22:10.160 13:21:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:10.160 13:21:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:10.727 13:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:22:10.727 13:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:22:10.727 13:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:10.987 [2024-07-26 13:21:51.309272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:10.987 [2024-07-26 13:21:51.309316] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:10.988 [2024-07-26 13:21:51.309332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc8ab00 00:22:10.988 [2024-07-26 13:21:51.309344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:10.988 [2024-07-26 13:21:51.309658] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:10.988 [2024-07-26 13:21:51.309675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:10.988 [2024-07-26 13:21:51.309729] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:10.988 [2024-07-26 13:21:51.309746] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:10.988 pt3 00:22:10.988 13:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:10.988 13:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:10.988 13:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # 
local expected_state=configuring 00:22:10.988 13:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:10.988 13:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:10.988 13:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:10.988 13:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:10.988 13:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:10.988 13:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:10.988 13:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:10.988 13:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.988 13:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.247 13:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:11.247 "name": "raid_bdev1", 00:22:11.247 "uuid": "987c2e81-27c6-4377-a55f-d98f28d47d12", 00:22:11.247 "strip_size_kb": 0, 00:22:11.247 "state": "configuring", 00:22:11.247 "raid_level": "raid1", 00:22:11.247 "superblock": true, 00:22:11.247 "num_base_bdevs": 4, 00:22:11.247 "num_base_bdevs_discovered": 2, 00:22:11.247 "num_base_bdevs_operational": 3, 00:22:11.247 "base_bdevs_list": [ 00:22:11.247 { 00:22:11.247 "name": null, 00:22:11.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.247 "is_configured": false, 00:22:11.247 "data_offset": 2048, 00:22:11.247 "data_size": 63488 00:22:11.247 }, 00:22:11.247 { 00:22:11.247 "name": "pt2", 00:22:11.247 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:11.247 "is_configured": true, 00:22:11.247 "data_offset": 2048, 
00:22:11.247 "data_size": 63488 00:22:11.247 }, 00:22:11.247 { 00:22:11.247 "name": "pt3", 00:22:11.247 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:11.247 "is_configured": true, 00:22:11.247 "data_offset": 2048, 00:22:11.247 "data_size": 63488 00:22:11.247 }, 00:22:11.247 { 00:22:11.247 "name": null, 00:22:11.247 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:11.247 "is_configured": false, 00:22:11.247 "data_offset": 2048, 00:22:11.247 "data_size": 63488 00:22:11.247 } 00:22:11.247 ] 00:22:11.247 }' 00:22:11.247 13:21:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:11.247 13:21:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.815 13:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:22:11.815 13:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:22:11.815 13:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:22:11.815 13:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:11.815 [2024-07-26 13:21:52.323943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:11.815 [2024-07-26 13:21:52.323989] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:11.815 [2024-07-26 13:21:52.324007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc914e0 00:22:11.815 [2024-07-26 13:21:52.324018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:11.815 [2024-07-26 13:21:52.324341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:11.815 [2024-07-26 13:21:52.324359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:11.815 
[2024-07-26 13:21:52.324414] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:11.815 [2024-07-26 13:21:52.324433] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:11.815 [2024-07-26 13:21:52.324540] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xc918d0 00:22:11.815 [2024-07-26 13:21:52.324549] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:11.815 [2024-07-26 13:21:52.324706] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc8c8d0 00:22:11.815 [2024-07-26 13:21:52.324828] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc918d0 00:22:11.815 [2024-07-26 13:21:52.324838] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xc918d0 00:22:11.815 [2024-07-26 13:21:52.324927] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:11.815 pt4 00:22:12.074 13:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:12.074 13:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:12.074 13:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:12.074 13:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:12.074 13:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:12.074 13:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:12.074 13:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:12.074 13:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:12.074 13:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:22:12.074 13:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:12.074 13:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.074 13:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.074 13:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:12.074 "name": "raid_bdev1", 00:22:12.074 "uuid": "987c2e81-27c6-4377-a55f-d98f28d47d12", 00:22:12.074 "strip_size_kb": 0, 00:22:12.074 "state": "online", 00:22:12.074 "raid_level": "raid1", 00:22:12.074 "superblock": true, 00:22:12.074 "num_base_bdevs": 4, 00:22:12.074 "num_base_bdevs_discovered": 3, 00:22:12.074 "num_base_bdevs_operational": 3, 00:22:12.074 "base_bdevs_list": [ 00:22:12.074 { 00:22:12.074 "name": null, 00:22:12.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.074 "is_configured": false, 00:22:12.074 "data_offset": 2048, 00:22:12.074 "data_size": 63488 00:22:12.074 }, 00:22:12.074 { 00:22:12.074 "name": "pt2", 00:22:12.074 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:12.074 "is_configured": true, 00:22:12.074 "data_offset": 2048, 00:22:12.074 "data_size": 63488 00:22:12.074 }, 00:22:12.074 { 00:22:12.074 "name": "pt3", 00:22:12.074 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:12.074 "is_configured": true, 00:22:12.074 "data_offset": 2048, 00:22:12.074 "data_size": 63488 00:22:12.074 }, 00:22:12.074 { 00:22:12.074 "name": "pt4", 00:22:12.074 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:12.074 "is_configured": true, 00:22:12.074 "data_offset": 2048, 00:22:12.074 "data_size": 63488 00:22:12.074 } 00:22:12.074 ] 00:22:12.074 }' 00:22:12.074 13:21:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:12.074 13:21:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.640 13:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:12.915 [2024-07-26 13:21:53.346629] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:12.915 [2024-07-26 13:21:53.346652] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:12.915 [2024-07-26 13:21:53.346703] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:12.915 [2024-07-26 13:21:53.346763] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:12.915 [2024-07-26 13:21:53.346774] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc918d0 name raid_bdev1, state offline 00:22:12.915 13:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.915 13:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:22:13.175 13:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:22:13.175 13:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:22:13.175 13:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 4 -gt 2 ']' 00:22:13.175 13:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # i=3 00:22:13.175 13:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@550 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:13.433 13:21:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:13.693 [2024-07-26 13:21:54.032396] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:13.693 [2024-07-26 13:21:54.032436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:13.693 [2024-07-26 13:21:54.032452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc8c530 00:22:13.693 [2024-07-26 13:21:54.032468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:13.693 [2024-07-26 13:21:54.033954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:13.693 [2024-07-26 13:21:54.033981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:13.693 [2024-07-26 13:21:54.034038] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:13.693 [2024-07-26 13:21:54.034061] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:13.693 [2024-07-26 13:21:54.034159] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:13.693 [2024-07-26 13:21:54.034172] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:13.693 [2024-07-26 13:21:54.034185] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc91cc0 name raid_bdev1, state configuring 00:22:13.693 [2024-07-26 13:21:54.034206] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:13.693 [2024-07-26 13:21:54.034272] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:13.693 pt1 00:22:13.693 13:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 4 -gt 2 ']' 00:22:13.693 13:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:22:13.693 
13:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:13.693 13:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:13.693 13:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:13.693 13:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:13.693 13:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:13.693 13:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:13.693 13:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:13.693 13:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:13.693 13:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:13.693 13:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.693 13:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.952 13:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:13.952 "name": "raid_bdev1", 00:22:13.952 "uuid": "987c2e81-27c6-4377-a55f-d98f28d47d12", 00:22:13.952 "strip_size_kb": 0, 00:22:13.952 "state": "configuring", 00:22:13.952 "raid_level": "raid1", 00:22:13.952 "superblock": true, 00:22:13.952 "num_base_bdevs": 4, 00:22:13.952 "num_base_bdevs_discovered": 2, 00:22:13.952 "num_base_bdevs_operational": 3, 00:22:13.952 "base_bdevs_list": [ 00:22:13.952 { 00:22:13.952 "name": null, 00:22:13.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.952 "is_configured": false, 00:22:13.952 "data_offset": 2048, 00:22:13.952 "data_size": 63488 00:22:13.952 
}, 00:22:13.952 { 00:22:13.952 "name": "pt2", 00:22:13.952 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:13.952 "is_configured": true, 00:22:13.952 "data_offset": 2048, 00:22:13.952 "data_size": 63488 00:22:13.952 }, 00:22:13.952 { 00:22:13.952 "name": "pt3", 00:22:13.952 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:13.952 "is_configured": true, 00:22:13.952 "data_offset": 2048, 00:22:13.952 "data_size": 63488 00:22:13.952 }, 00:22:13.952 { 00:22:13.952 "name": null, 00:22:13.952 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:13.952 "is_configured": false, 00:22:13.952 "data_offset": 2048, 00:22:13.952 "data_size": 63488 00:22:13.952 } 00:22:13.952 ] 00:22:13.952 }' 00:22:13.952 13:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:13.952 13:21:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.518 13:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:22:14.518 13:21:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:14.518 13:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:22:14.518 13:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:14.777 [2024-07-26 13:21:55.219522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:14.777 [2024-07-26 13:21:55.219567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:14.777 [2024-07-26 13:21:55.219585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc92690 00:22:14.777 [2024-07-26 13:21:55.219597] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:14.777 [2024-07-26 13:21:55.219917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:14.777 [2024-07-26 13:21:55.219933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:14.777 [2024-07-26 13:21:55.219988] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:14.777 [2024-07-26 13:21:55.220007] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:14.777 [2024-07-26 13:21:55.220111] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xc89c20 00:22:14.777 [2024-07-26 13:21:55.220120] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:14.777 [2024-07-26 13:21:55.220285] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc92ad0 00:22:14.777 [2024-07-26 13:21:55.220408] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc89c20 00:22:14.777 [2024-07-26 13:21:55.220418] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xc89c20 00:22:14.777 [2024-07-26 13:21:55.220508] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:14.777 pt4 00:22:14.777 13:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:14.777 13:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:14.777 13:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:14.777 13:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:14.777 13:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:14.777 13:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:22:14.777 13:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:14.777 13:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:14.777 13:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:14.777 13:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:14.777 13:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.777 13:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.037 13:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:15.037 "name": "raid_bdev1", 00:22:15.037 "uuid": "987c2e81-27c6-4377-a55f-d98f28d47d12", 00:22:15.037 "strip_size_kb": 0, 00:22:15.037 "state": "online", 00:22:15.037 "raid_level": "raid1", 00:22:15.037 "superblock": true, 00:22:15.037 "num_base_bdevs": 4, 00:22:15.037 "num_base_bdevs_discovered": 3, 00:22:15.037 "num_base_bdevs_operational": 3, 00:22:15.037 "base_bdevs_list": [ 00:22:15.037 { 00:22:15.037 "name": null, 00:22:15.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.037 "is_configured": false, 00:22:15.037 "data_offset": 2048, 00:22:15.037 "data_size": 63488 00:22:15.037 }, 00:22:15.037 { 00:22:15.037 "name": "pt2", 00:22:15.037 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:15.037 "is_configured": true, 00:22:15.037 "data_offset": 2048, 00:22:15.037 "data_size": 63488 00:22:15.037 }, 00:22:15.037 { 00:22:15.037 "name": "pt3", 00:22:15.037 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:15.037 "is_configured": true, 00:22:15.037 "data_offset": 2048, 00:22:15.037 "data_size": 63488 00:22:15.037 }, 00:22:15.037 { 00:22:15.037 "name": "pt4", 00:22:15.037 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:22:15.037 "is_configured": true, 00:22:15.037 "data_offset": 2048, 00:22:15.037 "data_size": 63488 00:22:15.037 } 00:22:15.037 ] 00:22:15.037 }' 00:22:15.037 13:21:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:15.037 13:21:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.655 13:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:22:15.655 13:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:15.914 13:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:22:15.914 13:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:15.914 13:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:22:16.174 [2024-07-26 13:21:56.467048] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:16.174 13:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 987c2e81-27c6-4377-a55f-d98f28d47d12 '!=' 987c2e81-27c6-4377-a55f-d98f28d47d12 ']' 00:22:16.174 13:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 774115 00:22:16.174 13:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 774115 ']' 00:22:16.174 13:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 774115 00:22:16.174 13:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:22:16.174 13:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:16.174 13:21:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 774115 00:22:16.174 13:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:16.174 13:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:16.174 13:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 774115' 00:22:16.174 killing process with pid 774115 00:22:16.174 13:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 774115 00:22:16.174 [2024-07-26 13:21:56.522909] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:16.174 [2024-07-26 13:21:56.522963] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:16.174 [2024-07-26 13:21:56.523025] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:16.174 [2024-07-26 13:21:56.523036] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc89c20 name raid_bdev1, state offline 00:22:16.174 13:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 774115 00:22:16.174 [2024-07-26 13:21:56.555936] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:16.434 13:21:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:22:16.434 00:22:16.434 real 0m23.180s 00:22:16.434 user 0m42.908s 00:22:16.434 sys 0m4.257s 00:22:16.434 13:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:16.434 13:21:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.434 ************************************ 00:22:16.434 END TEST raid_superblock_test 00:22:16.434 ************************************ 00:22:16.434 13:21:56 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:22:16.434 13:21:56 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:16.434 13:21:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:16.434 13:21:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:16.434 ************************************ 00:22:16.434 START TEST raid_read_error_test 00:22:16.434 ************************************ 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo 
BaseBdev4 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.tbVkLVFNNI 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=778597 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 778597 /var/tmp/spdk-raid.sock 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 778597 ']' 00:22:16.434 
13:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:16.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:16.434 13:21:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.434 [2024-07-26 13:21:56.890028] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:22:16.434 [2024-07-26 13:21:56.890083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778597 ] 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3d:01.5 cannot be used 
00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:16.693 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:16.693 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:16.693 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:16.693 [2024-07-26 13:21:57.022014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.693 [2024-07-26 13:21:57.109388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.693 [2024-07-26 13:21:57.163881] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: 
raid_bdev_get_ctx_size 00:22:16.693 [2024-07-26 13:21:57.163906] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:16.952 13:21:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:16.952 13:21:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:22:16.952 13:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:16.952 13:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:17.212 BaseBdev1_malloc 00:22:17.212 13:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:17.471 true 00:22:17.471 13:21:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:17.731 [2024-07-26 13:21:58.022440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:17.731 [2024-07-26 13:21:58.022481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.731 [2024-07-26 13:21:58.022499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1af8190 00:22:17.731 [2024-07-26 13:21:58.022510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.731 [2024-07-26 13:21:58.024096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.731 [2024-07-26 13:21:58.024123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:17.731 BaseBdev1 00:22:17.731 13:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 
-- # for bdev in "${base_bdevs[@]}" 00:22:17.731 13:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:17.731 BaseBdev2_malloc 00:22:17.990 13:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:22:17.990 true 00:22:17.990 13:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:18.249 [2024-07-26 13:21:58.704593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:18.249 [2024-07-26 13:21:58.704632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:18.249 [2024-07-26 13:21:58.704650] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1afce20 00:22:18.249 [2024-07-26 13:21:58.704661] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:18.249 [2024-07-26 13:21:58.706043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:18.249 [2024-07-26 13:21:58.706069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:18.249 BaseBdev2 00:22:18.249 13:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:18.249 13:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:18.509 BaseBdev3_malloc 00:22:18.509 13:21:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:22:18.768 true 00:22:18.768 13:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:19.027 [2024-07-26 13:21:59.382696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:19.027 [2024-07-26 13:21:59.382735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:19.027 [2024-07-26 13:21:59.382756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1afdd90 00:22:19.027 [2024-07-26 13:21:59.382768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:19.027 [2024-07-26 13:21:59.384135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:19.027 [2024-07-26 13:21:59.384174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:19.027 BaseBdev3 00:22:19.027 13:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:19.027 13:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:19.286 BaseBdev4_malloc 00:22:19.286 13:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:22:19.546 true 00:22:19.546 13:21:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:19.546 [2024-07-26 13:22:00.056684] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:19.546 [2024-07-26 13:22:00.056729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:19.546 [2024-07-26 13:22:00.056748] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b00000 00:22:19.546 [2024-07-26 13:22:00.056759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:19.546 [2024-07-26 13:22:00.058216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:19.546 [2024-07-26 13:22:00.058241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:19.546 BaseBdev4 00:22:19.805 13:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:22:19.805 [2024-07-26 13:22:00.285304] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:19.805 [2024-07-26 13:22:00.286510] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:19.805 [2024-07-26 13:22:00.286573] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:19.805 [2024-07-26 13:22:00.286626] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:19.805 [2024-07-26 13:22:00.286829] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b00dd0 00:22:19.805 [2024-07-26 13:22:00.286839] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:19.805 [2024-07-26 13:22:00.287029] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b056d0 00:22:19.805 [2024-07-26 13:22:00.287178] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b00dd0 00:22:19.805 [2024-07-26 
13:22:00.287188] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b00dd0 00:22:19.805 [2024-07-26 13:22:00.287299] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:19.805 13:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:19.805 13:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:19.805 13:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:19.805 13:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:19.805 13:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:19.806 13:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:19.806 13:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:19.806 13:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:19.806 13:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:19.806 13:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:19.806 13:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.806 13:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.064 13:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:20.064 "name": "raid_bdev1", 00:22:20.064 "uuid": "a698a72c-9fbf-49cc-821b-ec7c70140e27", 00:22:20.064 "strip_size_kb": 0, 00:22:20.064 "state": "online", 00:22:20.064 "raid_level": "raid1", 00:22:20.064 "superblock": true, 
00:22:20.064 "num_base_bdevs": 4, 00:22:20.064 "num_base_bdevs_discovered": 4, 00:22:20.064 "num_base_bdevs_operational": 4, 00:22:20.064 "base_bdevs_list": [ 00:22:20.064 { 00:22:20.064 "name": "BaseBdev1", 00:22:20.064 "uuid": "e4d8b718-2b04-5aca-bcf0-8edb3f85c362", 00:22:20.064 "is_configured": true, 00:22:20.064 "data_offset": 2048, 00:22:20.064 "data_size": 63488 00:22:20.064 }, 00:22:20.064 { 00:22:20.064 "name": "BaseBdev2", 00:22:20.064 "uuid": "ccb0f865-467b-5b35-903b-ca00d4beca01", 00:22:20.064 "is_configured": true, 00:22:20.064 "data_offset": 2048, 00:22:20.064 "data_size": 63488 00:22:20.064 }, 00:22:20.064 { 00:22:20.064 "name": "BaseBdev3", 00:22:20.064 "uuid": "ae0e45e4-2736-5e4a-8fde-a1aa16b2eda6", 00:22:20.064 "is_configured": true, 00:22:20.064 "data_offset": 2048, 00:22:20.064 "data_size": 63488 00:22:20.064 }, 00:22:20.064 { 00:22:20.064 "name": "BaseBdev4", 00:22:20.064 "uuid": "5e1dccbc-c328-598b-aadc-7c72ca74dd06", 00:22:20.064 "is_configured": true, 00:22:20.064 "data_offset": 2048, 00:22:20.064 "data_size": 63488 00:22:20.064 } 00:22:20.064 ] 00:22:20.064 }' 00:22:20.064 13:22:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:20.064 13:22:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.632 13:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:22:20.632 13:22:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:20.892 [2024-07-26 13:22:01.159853] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b02080 00:22:21.831 13:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:22:21.831 13:22:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:22:21.831 13:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:22:21.831 13:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:22:21.831 13:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:22:21.831 13:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:21.831 13:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:21.831 13:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:21.831 13:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:21.831 13:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:21.831 13:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:21.831 13:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:21.831 13:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:21.831 13:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:21.831 13:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:21.831 13:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.831 13:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.091 13:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:22.091 "name": "raid_bdev1", 00:22:22.091 "uuid": 
"a698a72c-9fbf-49cc-821b-ec7c70140e27", 00:22:22.091 "strip_size_kb": 0, 00:22:22.091 "state": "online", 00:22:22.091 "raid_level": "raid1", 00:22:22.091 "superblock": true, 00:22:22.091 "num_base_bdevs": 4, 00:22:22.091 "num_base_bdevs_discovered": 4, 00:22:22.091 "num_base_bdevs_operational": 4, 00:22:22.091 "base_bdevs_list": [ 00:22:22.091 { 00:22:22.091 "name": "BaseBdev1", 00:22:22.091 "uuid": "e4d8b718-2b04-5aca-bcf0-8edb3f85c362", 00:22:22.091 "is_configured": true, 00:22:22.091 "data_offset": 2048, 00:22:22.091 "data_size": 63488 00:22:22.091 }, 00:22:22.091 { 00:22:22.091 "name": "BaseBdev2", 00:22:22.091 "uuid": "ccb0f865-467b-5b35-903b-ca00d4beca01", 00:22:22.091 "is_configured": true, 00:22:22.091 "data_offset": 2048, 00:22:22.091 "data_size": 63488 00:22:22.091 }, 00:22:22.091 { 00:22:22.091 "name": "BaseBdev3", 00:22:22.091 "uuid": "ae0e45e4-2736-5e4a-8fde-a1aa16b2eda6", 00:22:22.091 "is_configured": true, 00:22:22.091 "data_offset": 2048, 00:22:22.091 "data_size": 63488 00:22:22.091 }, 00:22:22.091 { 00:22:22.091 "name": "BaseBdev4", 00:22:22.091 "uuid": "5e1dccbc-c328-598b-aadc-7c72ca74dd06", 00:22:22.091 "is_configured": true, 00:22:22.091 "data_offset": 2048, 00:22:22.091 "data_size": 63488 00:22:22.091 } 00:22:22.091 ] 00:22:22.091 }' 00:22:22.091 13:22:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:22.091 13:22:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.660 13:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:22.932 [2024-07-26 13:22:03.323425] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:22.932 [2024-07-26 13:22:03.323466] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:22.932 [2024-07-26 13:22:03.326359] bdev_raid.c: 
487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:22.932 [2024-07-26 13:22:03.326399] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:22.932 [2024-07-26 13:22:03.326505] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:22.932 [2024-07-26 13:22:03.326516] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b00dd0 name raid_bdev1, state offline 00:22:22.932 0 00:22:22.932 13:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 778597 00:22:22.932 13:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 778597 ']' 00:22:22.932 13:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 778597 00:22:22.932 13:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:22:22.932 13:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:22.932 13:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 778597 00:22:22.932 13:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:22.932 13:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:22.932 13:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 778597' 00:22:22.932 killing process with pid 778597 00:22:22.932 13:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 778597 00:22:22.932 [2024-07-26 13:22:03.401107] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:22.932 13:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 778597 00:22:22.932 [2024-07-26 13:22:03.428500] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:23.191 13:22:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:22:23.191 13:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.tbVkLVFNNI 00:22:23.191 13:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:22:23.191 13:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:22:23.192 13:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:22:23.192 13:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:23.192 13:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:22:23.192 13:22:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:22:23.192 00:22:23.192 real 0m6.807s 00:22:23.192 user 0m11.153s 00:22:23.192 sys 0m1.276s 00:22:23.192 13:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:23.192 13:22:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.192 ************************************ 00:22:23.192 END TEST raid_read_error_test 00:22:23.192 ************************************ 00:22:23.192 13:22:03 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:22:23.192 13:22:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:23.192 13:22:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:23.192 13:22:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:23.192 ************************************ 00:22:23.192 START TEST raid_write_error_test 00:22:23.192 ************************************ 00:22:23.192 13:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:22:23.192 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:22:23.192 
13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:22:23.451 
13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.cmCJsxjTHP 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=779759 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 779759 /var/tmp/spdk-raid.sock 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 779759 ']' 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:23.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.451 13:22:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:23.451 [2024-07-26 13:22:03.784418] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:22:23.451 [2024-07-26 13:22:03.784473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779759 ] 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3d:01.7 cannot be 
used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:23.451 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:23.451 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:23.451 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:23.451 [2024-07-26 13:22:03.915984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.710 [2024-07-26 13:22:04.002906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.710 [2024-07-26 13:22:04.055885] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:23.710 [2024-07-26 13:22:04.055916] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:24.279 13:22:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:24.279 13:22:04 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@864 -- # return 0 00:22:24.279 13:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:24.279 13:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:24.279 BaseBdev1_malloc 00:22:24.279 13:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:24.538 true 00:22:24.538 13:22:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:24.798 [2024-07-26 13:22:05.119061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:24.798 [2024-07-26 13:22:05.119101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:24.798 [2024-07-26 13:22:05.119119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd26190 00:22:24.798 [2024-07-26 13:22:05.119130] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:24.798 [2024-07-26 13:22:05.120696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:24.798 [2024-07-26 13:22:05.120724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:24.798 BaseBdev1 00:22:24.798 13:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:24.798 13:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:24.798 BaseBdev2_malloc 
00:22:24.798 13:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:22:25.057 true 00:22:25.057 13:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:25.316 [2024-07-26 13:22:05.588659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:25.316 [2024-07-26 13:22:05.588695] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.316 [2024-07-26 13:22:05.588713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd2ae20 00:22:25.316 [2024-07-26 13:22:05.588724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.316 [2024-07-26 13:22:05.590054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.316 [2024-07-26 13:22:05.590081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:25.316 BaseBdev2 00:22:25.316 13:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:25.316 13:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:25.316 BaseBdev3_malloc 00:22:25.316 13:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:22:25.576 true 00:22:25.576 13:22:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:25.835 [2024-07-26 13:22:06.114549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:25.835 [2024-07-26 13:22:06.114590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.835 [2024-07-26 13:22:06.114612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd2bd90 00:22:25.835 [2024-07-26 13:22:06.114624] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.835 [2024-07-26 13:22:06.116004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.835 [2024-07-26 13:22:06.116031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:25.835 BaseBdev3 00:22:25.835 13:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:22:25.835 13:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:25.835 BaseBdev4_malloc 00:22:25.835 13:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:22:26.095 true 00:22:26.095 13:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:26.355 [2024-07-26 13:22:06.644167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:26.355 [2024-07-26 13:22:06.644205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:26.355 [2024-07-26 13:22:06.644224] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0xd2e000 00:22:26.355 [2024-07-26 13:22:06.644236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:26.355 [2024-07-26 13:22:06.645613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:26.355 [2024-07-26 13:22:06.645640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:26.355 BaseBdev4 00:22:26.355 13:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:22:26.355 [2024-07-26 13:22:06.856761] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:26.355 [2024-07-26 13:22:06.857924] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:26.355 [2024-07-26 13:22:06.857988] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:26.355 [2024-07-26 13:22:06.858041] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:26.355 [2024-07-26 13:22:06.858245] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xd2edd0 00:22:26.355 [2024-07-26 13:22:06.858256] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:26.355 [2024-07-26 13:22:06.858443] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd336d0 00:22:26.355 [2024-07-26 13:22:06.858582] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd2edd0 00:22:26.355 [2024-07-26 13:22:06.858592] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd2edd0 00:22:26.355 [2024-07-26 13:22:06.858698] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:26.355 13:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:26.355 13:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:26.355 13:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:26.355 13:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:26.355 13:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:26.355 13:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:26.355 13:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:26.355 13:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:26.355 13:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:26.355 13:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:26.355 13:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.355 13:22:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.924 13:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:26.924 "name": "raid_bdev1", 00:22:26.924 "uuid": "c25c8abf-f8c9-47b2-9340-5e96dabed50c", 00:22:26.924 "strip_size_kb": 0, 00:22:26.924 "state": "online", 00:22:26.924 "raid_level": "raid1", 00:22:26.924 "superblock": true, 00:22:26.924 "num_base_bdevs": 4, 00:22:26.924 "num_base_bdevs_discovered": 4, 00:22:26.924 "num_base_bdevs_operational": 4, 00:22:26.924 "base_bdevs_list": [ 00:22:26.924 { 00:22:26.924 "name": "BaseBdev1", 00:22:26.924 "uuid": "f2ae1382-24e5-5672-b158-04ab19290c80", 00:22:26.924 "is_configured": true, 
00:22:26.924 "data_offset": 2048, 00:22:26.924 "data_size": 63488 00:22:26.924 }, 00:22:26.924 { 00:22:26.924 "name": "BaseBdev2", 00:22:26.924 "uuid": "d535ed64-da7f-5e5a-a4d6-fc4b63770ee6", 00:22:26.924 "is_configured": true, 00:22:26.924 "data_offset": 2048, 00:22:26.924 "data_size": 63488 00:22:26.924 }, 00:22:26.924 { 00:22:26.924 "name": "BaseBdev3", 00:22:26.924 "uuid": "77b4b3ad-4cbb-548c-86cd-3a91c9beaf8d", 00:22:26.924 "is_configured": true, 00:22:26.924 "data_offset": 2048, 00:22:26.924 "data_size": 63488 00:22:26.924 }, 00:22:26.924 { 00:22:26.924 "name": "BaseBdev4", 00:22:26.924 "uuid": "54d9a9da-b6d1-54c9-9a4b-249521913836", 00:22:26.924 "is_configured": true, 00:22:26.924 "data_offset": 2048, 00:22:26.924 "data_size": 63488 00:22:26.924 } 00:22:26.924 ] 00:22:26.924 }' 00:22:26.924 13:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:26.924 13:22:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.493 13:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:22:27.493 13:22:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:27.752 [2024-07-26 13:22:08.032113] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd30080 00:22:28.744 13:22:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:22:28.744 [2024-07-26 13:22:09.146232] bdev_raid.c:2263:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:22:28.744 [2024-07-26 13:22:09.146284] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:28.744 [2024-07-26 13:22:09.146494] 
bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0xd30080 00:22:28.744 13:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:22:28.744 13:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:22:28.744 13:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:22:28.744 13:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=3 00:22:28.744 13:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:28.744 13:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:28.744 13:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:28.744 13:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:28.744 13:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:28.744 13:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:28.744 13:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:28.744 13:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:28.744 13:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:28.744 13:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:28.744 13:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.744 13:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.004 13:22:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:29.004 "name": "raid_bdev1", 00:22:29.004 "uuid": "c25c8abf-f8c9-47b2-9340-5e96dabed50c", 00:22:29.004 "strip_size_kb": 0, 00:22:29.004 "state": "online", 00:22:29.004 "raid_level": "raid1", 00:22:29.004 "superblock": true, 00:22:29.004 "num_base_bdevs": 4, 00:22:29.004 "num_base_bdevs_discovered": 3, 00:22:29.004 "num_base_bdevs_operational": 3, 00:22:29.004 "base_bdevs_list": [ 00:22:29.004 { 00:22:29.004 "name": null, 00:22:29.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.004 "is_configured": false, 00:22:29.004 "data_offset": 2048, 00:22:29.004 "data_size": 63488 00:22:29.004 }, 00:22:29.004 { 00:22:29.004 "name": "BaseBdev2", 00:22:29.004 "uuid": "d535ed64-da7f-5e5a-a4d6-fc4b63770ee6", 00:22:29.004 "is_configured": true, 00:22:29.004 "data_offset": 2048, 00:22:29.004 "data_size": 63488 00:22:29.004 }, 00:22:29.004 { 00:22:29.004 "name": "BaseBdev3", 00:22:29.004 "uuid": "77b4b3ad-4cbb-548c-86cd-3a91c9beaf8d", 00:22:29.004 "is_configured": true, 00:22:29.004 "data_offset": 2048, 00:22:29.004 "data_size": 63488 00:22:29.004 }, 00:22:29.004 { 00:22:29.004 "name": "BaseBdev4", 00:22:29.004 "uuid": "54d9a9da-b6d1-54c9-9a4b-249521913836", 00:22:29.004 "is_configured": true, 00:22:29.004 "data_offset": 2048, 00:22:29.004 "data_size": 63488 00:22:29.004 } 00:22:29.004 ] 00:22:29.004 }' 00:22:29.004 13:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:29.004 13:22:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.572 13:22:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:29.831 [2024-07-26 13:22:10.106775] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:29.831 [2024-07-26 13:22:10.106806] 
bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:29.831 [2024-07-26 13:22:10.109689] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:29.831 [2024-07-26 13:22:10.109725] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:29.831 [2024-07-26 13:22:10.109813] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:29.831 [2024-07-26 13:22:10.109824] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd2edd0 name raid_bdev1, state offline 00:22:29.831 0 00:22:29.831 13:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 779759 00:22:29.831 13:22:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 779759 ']' 00:22:29.831 13:22:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 779759 00:22:29.831 13:22:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:22:29.831 13:22:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:29.831 13:22:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 779759 00:22:29.831 13:22:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:29.831 13:22:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:29.831 13:22:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 779759' 00:22:29.831 killing process with pid 779759 00:22:29.831 13:22:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 779759 00:22:29.831 [2024-07-26 13:22:10.173970] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:29.831 13:22:10 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@974 -- # wait 779759 00:22:29.831 [2024-07-26 13:22:10.200176] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:30.091 13:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.cmCJsxjTHP 00:22:30.091 13:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:22:30.091 13:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:22:30.091 13:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:22:30.091 13:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:22:30.091 13:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:30.091 13:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:22:30.091 13:22:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:22:30.091 00:22:30.091 real 0m6.691s 00:22:30.091 user 0m10.556s 00:22:30.091 sys 0m1.163s 00:22:30.091 13:22:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:30.091 13:22:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.091 ************************************ 00:22:30.091 END TEST raid_write_error_test 00:22:30.091 ************************************ 00:22:30.091 13:22:10 bdev_raid -- bdev/bdev_raid.sh@955 -- # '[' true = true ']' 00:22:30.091 13:22:10 bdev_raid -- bdev/bdev_raid.sh@956 -- # for n in 2 4 00:22:30.091 13:22:10 bdev_raid -- bdev/bdev_raid.sh@957 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:22:30.091 13:22:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:22:30.091 13:22:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:30.091 13:22:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:30.091 ************************************ 
00:22:30.091 START TEST raid_rebuild_test 00:22:30.091 ************************************ 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@592 -- # local create_arg 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=780998 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 780998 /var/tmp/spdk-raid.sock 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 780998 ']' 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:30.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:30.091 13:22:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.091 [2024-07-26 13:22:10.556133] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:22:30.091 [2024-07-26 13:22:10.556200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780998 ] 00:22:30.091 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:30.091 Zero copy mechanism will not be used. 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:30.351 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:30.351 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:30.351 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:30.351 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:30.351 [2024-07-26 13:22:10.687125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.351 [2024-07-26 13:22:10.772884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.351 [2024-07-26 13:22:10.826097] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:30.351 [2024-07-26 13:22:10.826127] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:31.289 13:22:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:31.289 13:22:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:22:31.289 13:22:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:22:31.289 13:22:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:31.289 BaseBdev1_malloc 00:22:31.289 13:22:11 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:31.549 [2024-07-26 13:22:11.905314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:31.549 [2024-07-26 13:22:11.905357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:31.549 [2024-07-26 13:22:11.905378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf195f0 00:22:31.549 [2024-07-26 13:22:11.905390] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:31.549 [2024-07-26 13:22:11.906908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:31.549 [2024-07-26 13:22:11.906936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:31.549 BaseBdev1 00:22:31.549 13:22:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:22:31.549 13:22:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:31.808 BaseBdev2_malloc 00:22:31.808 13:22:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:32.067 [2024-07-26 13:22:12.363055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:32.067 [2024-07-26 13:22:12.363095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:32.067 [2024-07-26 13:22:12.363112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10bd130 00:22:32.067 [2024-07-26 13:22:12.363123] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:32.067 [2024-07-26 13:22:12.364543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:32.067 [2024-07-26 13:22:12.364570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:32.067 BaseBdev2 00:22:32.067 13:22:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:32.327 spare_malloc 00:22:32.327 13:22:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:32.327 spare_delay 00:22:32.327 13:22:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:32.586 [2024-07-26 13:22:13.045211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:32.586 [2024-07-26 13:22:13.045254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:32.586 [2024-07-26 13:22:13.045271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10bc770 00:22:32.586 [2024-07-26 13:22:13.045282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:32.586 [2024-07-26 13:22:13.046674] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:32.586 [2024-07-26 13:22:13.046702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:32.586 spare 00:22:32.586 13:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 
'BaseBdev1 BaseBdev2' -n raid_bdev1 00:22:32.850 [2024-07-26 13:22:13.269810] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:32.850 [2024-07-26 13:22:13.270974] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:32.850 [2024-07-26 13:22:13.271041] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xf11270 00:22:32.850 [2024-07-26 13:22:13.271051] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:32.850 [2024-07-26 13:22:13.271254] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10bd3c0 00:22:32.850 [2024-07-26 13:22:13.271381] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf11270 00:22:32.850 [2024-07-26 13:22:13.271391] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xf11270 00:22:32.850 [2024-07-26 13:22:13.271495] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:32.850 13:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:32.850 13:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:32.850 13:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:32.850 13:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:32.850 13:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:32.850 13:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:32.851 13:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:32.851 13:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:32.851 13:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:22:32.851 13:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:32.851 13:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.851 13:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.124 13:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:33.124 "name": "raid_bdev1", 00:22:33.124 "uuid": "af46832b-cb49-4a5c-8f16-29ee46ee054d", 00:22:33.124 "strip_size_kb": 0, 00:22:33.124 "state": "online", 00:22:33.124 "raid_level": "raid1", 00:22:33.124 "superblock": false, 00:22:33.124 "num_base_bdevs": 2, 00:22:33.124 "num_base_bdevs_discovered": 2, 00:22:33.124 "num_base_bdevs_operational": 2, 00:22:33.124 "base_bdevs_list": [ 00:22:33.124 { 00:22:33.124 "name": "BaseBdev1", 00:22:33.124 "uuid": "8f394f68-300d-57e6-9b3d-51c811ff4566", 00:22:33.124 "is_configured": true, 00:22:33.124 "data_offset": 0, 00:22:33.124 "data_size": 65536 00:22:33.124 }, 00:22:33.124 { 00:22:33.124 "name": "BaseBdev2", 00:22:33.124 "uuid": "e65d40f7-e604-5a72-a2f1-99c88a50b8ce", 00:22:33.124 "is_configured": true, 00:22:33.124 "data_offset": 0, 00:22:33.124 "data_size": 65536 00:22:33.124 } 00:22:33.124 ] 00:22:33.124 }' 00:22:33.124 13:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:33.124 13:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.693 13:22:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:33.693 13:22:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:22:33.952 [2024-07-26 13:22:14.308767] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:22:33.952 13:22:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:22:33.952 13:22:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.952 13:22:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:34.211 13:22:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:22:34.211 13:22:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:22:34.211 13:22:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:22:34.211 13:22:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:22:34.211 13:22:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:34.211 13:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:34.211 13:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:34.211 13:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:34.211 13:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:34.211 13:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:34.211 13:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:22:34.211 13:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:34.211 13:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:34.211 13:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:34.471 
[2024-07-26 13:22:14.769798] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10bd3c0 00:22:34.471 /dev/nbd0 00:22:34.471 13:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:34.471 13:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:34.471 13:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:34.471 13:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:22:34.471 13:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:34.471 13:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:34.471 13:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:22:34.471 13:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:22:34.471 13:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:34.471 13:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:34.471 13:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:34.471 1+0 records in 00:22:34.471 1+0 records out 00:22:34.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224088 s, 18.3 MB/s 00:22:34.471 13:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:34.471 13:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:22:34.471 13:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:34.471 13:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 
4096 '!=' 0 ']' 00:22:34.471 13:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:22:34.471 13:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:34.471 13:22:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:34.471 13:22:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:22:34.471 13:22:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:22:34.471 13:22:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:22:38.664 65536+0 records in 00:22:38.664 65536+0 records out 00:22:38.664 33554432 bytes (34 MB, 32 MiB) copied, 4.13068 s, 8.1 MB/s 00:22:38.664 13:22:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:38.664 13:22:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:38.664 13:22:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:38.664 13:22:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:38.664 13:22:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:22:38.664 13:22:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:38.664 13:22:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:38.664 13:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:38.664 13:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:38.664 13:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:38.664 13:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 
1 )) 00:22:38.664 13:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:38.664 13:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:38.664 [2024-07-26 13:22:19.162402] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:38.664 13:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:38.664 13:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:38.664 13:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:38.923 [2024-07-26 13:22:19.370983] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:38.923 13:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:38.923 13:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:38.923 13:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:38.923 13:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:38.923 13:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:38.923 13:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:38.923 13:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:38.923 13:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:38.923 13:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:38.923 13:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:38.923 13:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.923 13:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.182 13:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:39.182 "name": "raid_bdev1", 00:22:39.182 "uuid": "af46832b-cb49-4a5c-8f16-29ee46ee054d", 00:22:39.182 "strip_size_kb": 0, 00:22:39.182 "state": "online", 00:22:39.182 "raid_level": "raid1", 00:22:39.182 "superblock": false, 00:22:39.182 "num_base_bdevs": 2, 00:22:39.182 "num_base_bdevs_discovered": 1, 00:22:39.182 "num_base_bdevs_operational": 1, 00:22:39.182 "base_bdevs_list": [ 00:22:39.182 { 00:22:39.182 "name": null, 00:22:39.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.182 "is_configured": false, 00:22:39.182 "data_offset": 0, 00:22:39.182 "data_size": 65536 00:22:39.182 }, 00:22:39.182 { 00:22:39.182 "name": "BaseBdev2", 00:22:39.182 "uuid": "e65d40f7-e604-5a72-a2f1-99c88a50b8ce", 00:22:39.182 "is_configured": true, 00:22:39.182 "data_offset": 0, 00:22:39.182 "data_size": 65536 00:22:39.182 } 00:22:39.182 ] 00:22:39.182 }' 00:22:39.182 13:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:39.182 13:22:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.748 13:22:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:40.007 [2024-07-26 13:22:20.437804] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:40.007 [2024-07-26 13:22:20.442529] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10b1290 00:22:40.007 [2024-07-26 13:22:20.444570] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:40.007 
13:22:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:40.942 13:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:40.942 13:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:40.942 13:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:40.942 13:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:40.942 13:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:40.943 13:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.943 13:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.202 13:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:41.202 "name": "raid_bdev1", 00:22:41.202 "uuid": "af46832b-cb49-4a5c-8f16-29ee46ee054d", 00:22:41.202 "strip_size_kb": 0, 00:22:41.202 "state": "online", 00:22:41.202 "raid_level": "raid1", 00:22:41.202 "superblock": false, 00:22:41.202 "num_base_bdevs": 2, 00:22:41.202 "num_base_bdevs_discovered": 2, 00:22:41.202 "num_base_bdevs_operational": 2, 00:22:41.202 "process": { 00:22:41.202 "type": "rebuild", 00:22:41.202 "target": "spare", 00:22:41.202 "progress": { 00:22:41.202 "blocks": 22528, 00:22:41.202 "percent": 34 00:22:41.202 } 00:22:41.202 }, 00:22:41.202 "base_bdevs_list": [ 00:22:41.202 { 00:22:41.202 "name": "spare", 00:22:41.202 "uuid": "9166a23c-48f0-54fe-9008-3004c0bc85cb", 00:22:41.202 "is_configured": true, 00:22:41.202 "data_offset": 0, 00:22:41.202 "data_size": 65536 00:22:41.202 }, 00:22:41.202 { 00:22:41.202 "name": "BaseBdev2", 00:22:41.202 "uuid": "e65d40f7-e604-5a72-a2f1-99c88a50b8ce", 00:22:41.202 "is_configured": 
true, 00:22:41.202 "data_offset": 0, 00:22:41.202 "data_size": 65536 00:22:41.202 } 00:22:41.202 ] 00:22:41.202 }' 00:22:41.202 13:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:41.202 13:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:41.202 13:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:41.461 13:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:41.461 13:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:41.461 [2024-07-26 13:22:21.950652] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:41.461 [2024-07-26 13:22:21.955497] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:41.461 [2024-07-26 13:22:21.955541] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:41.461 [2024-07-26 13:22:21.955555] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:41.461 [2024-07-26 13:22:21.955563] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:41.461 13:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:41.461 13:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:41.461 13:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:41.461 13:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:41.461 13:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:41.461 13:22:21 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:22:41.461 13:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:41.461 13:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:41.461 13:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:41.461 13:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:41.720 13:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.720 13:22:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.720 13:22:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:41.720 "name": "raid_bdev1", 00:22:41.720 "uuid": "af46832b-cb49-4a5c-8f16-29ee46ee054d", 00:22:41.720 "strip_size_kb": 0, 00:22:41.720 "state": "online", 00:22:41.720 "raid_level": "raid1", 00:22:41.720 "superblock": false, 00:22:41.720 "num_base_bdevs": 2, 00:22:41.720 "num_base_bdevs_discovered": 1, 00:22:41.720 "num_base_bdevs_operational": 1, 00:22:41.720 "base_bdevs_list": [ 00:22:41.720 { 00:22:41.720 "name": null, 00:22:41.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:41.720 "is_configured": false, 00:22:41.720 "data_offset": 0, 00:22:41.720 "data_size": 65536 00:22:41.720 }, 00:22:41.720 { 00:22:41.720 "name": "BaseBdev2", 00:22:41.720 "uuid": "e65d40f7-e604-5a72-a2f1-99c88a50b8ce", 00:22:41.720 "is_configured": true, 00:22:41.720 "data_offset": 0, 00:22:41.720 "data_size": 65536 00:22:41.720 } 00:22:41.720 ] 00:22:41.720 }' 00:22:41.720 13:22:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:41.720 13:22:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.287 13:22:22 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:42.287 13:22:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:42.287 13:22:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:42.287 13:22:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:42.287 13:22:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:42.287 13:22:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.287 13:22:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.663 13:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:42.663 "name": "raid_bdev1", 00:22:42.663 "uuid": "af46832b-cb49-4a5c-8f16-29ee46ee054d", 00:22:42.663 "strip_size_kb": 0, 00:22:42.663 "state": "online", 00:22:42.663 "raid_level": "raid1", 00:22:42.663 "superblock": false, 00:22:42.663 "num_base_bdevs": 2, 00:22:42.663 "num_base_bdevs_discovered": 1, 00:22:42.663 "num_base_bdevs_operational": 1, 00:22:42.663 "base_bdevs_list": [ 00:22:42.663 { 00:22:42.663 "name": null, 00:22:42.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.663 "is_configured": false, 00:22:42.663 "data_offset": 0, 00:22:42.663 "data_size": 65536 00:22:42.663 }, 00:22:42.663 { 00:22:42.663 "name": "BaseBdev2", 00:22:42.663 "uuid": "e65d40f7-e604-5a72-a2f1-99c88a50b8ce", 00:22:42.663 "is_configured": true, 00:22:42.663 "data_offset": 0, 00:22:42.663 "data_size": 65536 00:22:42.663 } 00:22:42.663 ] 00:22:42.663 }' 00:22:42.663 13:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:42.663 13:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:42.663 
13:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:42.663 13:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:42.663 13:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:42.922 [2024-07-26 13:22:23.323575] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:42.922 [2024-07-26 13:22:23.328286] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10bd3c0 00:22:42.922 [2024-07-26 13:22:23.329640] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:42.923 13:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:22:43.867 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:43.867 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:43.867 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:43.867 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:43.867 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:43.867 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.867 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.150 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:44.150 "name": "raid_bdev1", 00:22:44.150 "uuid": "af46832b-cb49-4a5c-8f16-29ee46ee054d", 00:22:44.150 "strip_size_kb": 0, 00:22:44.150 "state": 
"online", 00:22:44.150 "raid_level": "raid1", 00:22:44.150 "superblock": false, 00:22:44.150 "num_base_bdevs": 2, 00:22:44.150 "num_base_bdevs_discovered": 2, 00:22:44.150 "num_base_bdevs_operational": 2, 00:22:44.150 "process": { 00:22:44.150 "type": "rebuild", 00:22:44.150 "target": "spare", 00:22:44.150 "progress": { 00:22:44.150 "blocks": 24576, 00:22:44.150 "percent": 37 00:22:44.150 } 00:22:44.150 }, 00:22:44.150 "base_bdevs_list": [ 00:22:44.150 { 00:22:44.150 "name": "spare", 00:22:44.150 "uuid": "9166a23c-48f0-54fe-9008-3004c0bc85cb", 00:22:44.150 "is_configured": true, 00:22:44.150 "data_offset": 0, 00:22:44.150 "data_size": 65536 00:22:44.150 }, 00:22:44.150 { 00:22:44.150 "name": "BaseBdev2", 00:22:44.150 "uuid": "e65d40f7-e604-5a72-a2f1-99c88a50b8ce", 00:22:44.150 "is_configured": true, 00:22:44.150 "data_offset": 0, 00:22:44.150 "data_size": 65536 00:22:44.150 } 00:22:44.150 ] 00:22:44.150 }' 00:22:44.150 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:44.150 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:44.150 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:44.150 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:44.150 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:22:44.150 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:22:44.150 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:22:44.150 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:22:44.150 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=730 00:22:44.150 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:22:44.150 
13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:44.150 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:44.150 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:44.150 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:44.150 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:44.150 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.150 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.410 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:44.410 "name": "raid_bdev1", 00:22:44.410 "uuid": "af46832b-cb49-4a5c-8f16-29ee46ee054d", 00:22:44.410 "strip_size_kb": 0, 00:22:44.410 "state": "online", 00:22:44.410 "raid_level": "raid1", 00:22:44.410 "superblock": false, 00:22:44.410 "num_base_bdevs": 2, 00:22:44.410 "num_base_bdevs_discovered": 2, 00:22:44.410 "num_base_bdevs_operational": 2, 00:22:44.410 "process": { 00:22:44.410 "type": "rebuild", 00:22:44.410 "target": "spare", 00:22:44.410 "progress": { 00:22:44.410 "blocks": 28672, 00:22:44.410 "percent": 43 00:22:44.410 } 00:22:44.410 }, 00:22:44.410 "base_bdevs_list": [ 00:22:44.410 { 00:22:44.410 "name": "spare", 00:22:44.410 "uuid": "9166a23c-48f0-54fe-9008-3004c0bc85cb", 00:22:44.410 "is_configured": true, 00:22:44.410 "data_offset": 0, 00:22:44.410 "data_size": 65536 00:22:44.410 }, 00:22:44.410 { 00:22:44.410 "name": "BaseBdev2", 00:22:44.410 "uuid": "e65d40f7-e604-5a72-a2f1-99c88a50b8ce", 00:22:44.410 "is_configured": true, 00:22:44.410 "data_offset": 0, 00:22:44.410 "data_size": 65536 00:22:44.410 } 
00:22:44.410 ] 00:22:44.410 }' 00:22:44.410 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:44.410 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:44.410 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:44.410 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:44.410 13:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:22:45.789 13:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:22:45.789 13:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:45.789 13:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:45.789 13:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:45.789 13:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:45.789 13:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:45.789 13:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.789 13:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.789 13:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:45.789 "name": "raid_bdev1", 00:22:45.789 "uuid": "af46832b-cb49-4a5c-8f16-29ee46ee054d", 00:22:45.789 "strip_size_kb": 0, 00:22:45.789 "state": "online", 00:22:45.789 "raid_level": "raid1", 00:22:45.789 "superblock": false, 00:22:45.789 "num_base_bdevs": 2, 00:22:45.789 "num_base_bdevs_discovered": 2, 00:22:45.789 "num_base_bdevs_operational": 2, 
00:22:45.789 "process": { 00:22:45.789 "type": "rebuild", 00:22:45.789 "target": "spare", 00:22:45.789 "progress": { 00:22:45.789 "blocks": 55296, 00:22:45.789 "percent": 84 00:22:45.789 } 00:22:45.789 }, 00:22:45.789 "base_bdevs_list": [ 00:22:45.789 { 00:22:45.789 "name": "spare", 00:22:45.789 "uuid": "9166a23c-48f0-54fe-9008-3004c0bc85cb", 00:22:45.789 "is_configured": true, 00:22:45.789 "data_offset": 0, 00:22:45.789 "data_size": 65536 00:22:45.789 }, 00:22:45.789 { 00:22:45.789 "name": "BaseBdev2", 00:22:45.789 "uuid": "e65d40f7-e604-5a72-a2f1-99c88a50b8ce", 00:22:45.789 "is_configured": true, 00:22:45.789 "data_offset": 0, 00:22:45.789 "data_size": 65536 00:22:45.789 } 00:22:45.789 ] 00:22:45.789 }' 00:22:45.789 13:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:45.789 13:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:45.789 13:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:45.789 13:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:22:45.789 13:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:22:46.048 [2024-07-26 13:22:26.552761] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:46.048 [2024-07-26 13:22:26.552814] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:46.048 [2024-07-26 13:22:26.552851] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:46.986 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:22:46.986 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:46.986 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:46.986 13:22:27 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:22:46.986 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:22:46.986 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:46.986 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.986 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:46.986 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:46.986 "name": "raid_bdev1", 00:22:46.986 "uuid": "af46832b-cb49-4a5c-8f16-29ee46ee054d", 00:22:46.986 "strip_size_kb": 0, 00:22:46.986 "state": "online", 00:22:46.986 "raid_level": "raid1", 00:22:46.986 "superblock": false, 00:22:46.986 "num_base_bdevs": 2, 00:22:46.986 "num_base_bdevs_discovered": 2, 00:22:46.986 "num_base_bdevs_operational": 2, 00:22:46.986 "base_bdevs_list": [ 00:22:46.986 { 00:22:46.986 "name": "spare", 00:22:46.986 "uuid": "9166a23c-48f0-54fe-9008-3004c0bc85cb", 00:22:46.986 "is_configured": true, 00:22:46.986 "data_offset": 0, 00:22:46.986 "data_size": 65536 00:22:46.986 }, 00:22:46.986 { 00:22:46.986 "name": "BaseBdev2", 00:22:46.986 "uuid": "e65d40f7-e604-5a72-a2f1-99c88a50b8ce", 00:22:46.986 "is_configured": true, 00:22:46.986 "data_offset": 0, 00:22:46.986 "data_size": 65536 00:22:46.986 } 00:22:46.986 ] 00:22:46.986 }' 00:22:46.986 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:22:47.245 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:47.246 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:47.246 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == 
\s\p\a\r\e ]] 00:22:47.246 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:22:47.246 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:47.246 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:22:47.246 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:22:47.246 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:22:47.246 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:22:47.246 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.246 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.505 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:47.505 "name": "raid_bdev1", 00:22:47.505 "uuid": "af46832b-cb49-4a5c-8f16-29ee46ee054d", 00:22:47.505 "strip_size_kb": 0, 00:22:47.505 "state": "online", 00:22:47.505 "raid_level": "raid1", 00:22:47.505 "superblock": false, 00:22:47.505 "num_base_bdevs": 2, 00:22:47.505 "num_base_bdevs_discovered": 2, 00:22:47.505 "num_base_bdevs_operational": 2, 00:22:47.505 "base_bdevs_list": [ 00:22:47.505 { 00:22:47.505 "name": "spare", 00:22:47.505 "uuid": "9166a23c-48f0-54fe-9008-3004c0bc85cb", 00:22:47.505 "is_configured": true, 00:22:47.505 "data_offset": 0, 00:22:47.505 "data_size": 65536 00:22:47.505 }, 00:22:47.505 { 00:22:47.505 "name": "BaseBdev2", 00:22:47.505 "uuid": "e65d40f7-e604-5a72-a2f1-99c88a50b8ce", 00:22:47.505 "is_configured": true, 00:22:47.505 "data_offset": 0, 00:22:47.505 "data_size": 65536 00:22:47.505 } 00:22:47.505 ] 00:22:47.505 }' 00:22:47.505 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 
-- # jq -r '.process.type // "none"' 00:22:47.505 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:22:47.505 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:22:47.505 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:47.505 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:47.505 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:47.505 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:47.505 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:47.505 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:47.505 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:47.505 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:47.505 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:47.505 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:47.505 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:47.505 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.505 13:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.764 13:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:47.764 "name": "raid_bdev1", 00:22:47.764 "uuid": "af46832b-cb49-4a5c-8f16-29ee46ee054d", 00:22:47.764 "strip_size_kb": 0, 00:22:47.764 "state": 
"online", 00:22:47.764 "raid_level": "raid1", 00:22:47.764 "superblock": false, 00:22:47.764 "num_base_bdevs": 2, 00:22:47.764 "num_base_bdevs_discovered": 2, 00:22:47.764 "num_base_bdevs_operational": 2, 00:22:47.764 "base_bdevs_list": [ 00:22:47.764 { 00:22:47.764 "name": "spare", 00:22:47.764 "uuid": "9166a23c-48f0-54fe-9008-3004c0bc85cb", 00:22:47.764 "is_configured": true, 00:22:47.764 "data_offset": 0, 00:22:47.764 "data_size": 65536 00:22:47.764 }, 00:22:47.764 { 00:22:47.764 "name": "BaseBdev2", 00:22:47.764 "uuid": "e65d40f7-e604-5a72-a2f1-99c88a50b8ce", 00:22:47.764 "is_configured": true, 00:22:47.764 "data_offset": 0, 00:22:47.764 "data_size": 65536 00:22:47.764 } 00:22:47.764 ] 00:22:47.764 }' 00:22:47.764 13:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:47.764 13:22:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.332 13:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:48.591 [2024-07-26 13:22:28.911000] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:48.591 [2024-07-26 13:22:28.911025] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:48.592 [2024-07-26 13:22:28.911081] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:48.592 [2024-07-26 13:22:28.911136] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:48.592 [2024-07-26 13:22:28.911153] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf11270 name raid_bdev1, state offline 00:22:48.592 13:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.592 13:22:28 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:22:48.851 13:22:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:22:48.851 13:22:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:22:48.851 13:22:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:22:48.851 13:22:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:48.851 13:22:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:48.851 13:22:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:48.851 13:22:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:48.851 13:22:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:48.851 13:22:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:48.851 13:22:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:22:48.851 13:22:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:48.851 13:22:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:48.851 13:22:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:49.110 /dev/nbd0 00:22:49.110 13:22:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:49.110 13:22:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:49.110 13:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:49.110 13:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:22:49.110 13:22:29 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:49.110 13:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:49.110 13:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:22:49.110 13:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:22:49.110 13:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:49.110 13:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:49.110 13:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:49.110 1+0 records in 00:22:49.110 1+0 records out 00:22:49.110 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244294 s, 16.8 MB/s 00:22:49.110 13:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:49.110 13:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:22:49.110 13:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:49.110 13:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:49.110 13:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:22:49.110 13:22:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:49.110 13:22:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:49.110 13:22:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:49.370 /dev/nbd1 00:22:49.370 13:22:29 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:49.370 13:22:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:49.370 13:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:22:49.370 13:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:22:49.370 13:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:49.370 13:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:49.370 13:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:22:49.370 13:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:22:49.370 13:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:49.370 13:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:49.370 13:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:49.370 1+0 records in 00:22:49.370 1+0 records out 00:22:49.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028563 s, 14.3 MB/s 00:22:49.370 13:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:49.370 13:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:22:49.370 13:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:49.370 13:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:49.370 13:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:22:49.370 13:22:29 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:49.370 13:22:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:49.370 13:22:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:49.370 13:22:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:49.370 13:22:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:49.370 13:22:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:49.370 13:22:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:49.370 13:22:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:22:49.370 13:22:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:49.370 13:22:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:49.629 13:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:49.629 13:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:49.629 13:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:49.629 13:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:49.629 13:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:49.629 13:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:49.629 13:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:49.629 13:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:49.629 13:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:22:49.629 13:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:49.889 13:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:49.889 13:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:49.889 13:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:49.889 13:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:49.889 13:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:49.889 13:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:49.889 13:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:49.889 13:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:49.889 13:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:22:49.889 13:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 780998 00:22:49.889 13:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 780998 ']' 00:22:49.889 13:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 780998 00:22:49.889 13:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:22:49.889 13:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:49.889 13:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 780998 00:22:49.889 13:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:49.889 13:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:49.889 13:22:30 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 780998' 00:22:49.889 killing process with pid 780998 00:22:49.889 13:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 780998 00:22:49.889 Received shutdown signal, test time was about 60.000000 seconds 00:22:49.889 00:22:49.889 Latency(us) 00:22:49.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.889 =================================================================================================================== 00:22:49.889 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:49.889 [2024-07-26 13:22:30.339733] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:49.889 13:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 780998 00:22:49.889 [2024-07-26 13:22:30.362598] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:50.148 13:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0 00:22:50.148 00:22:50.148 real 0m20.057s 00:22:50.148 user 0m27.646s 00:22:50.148 sys 0m4.161s 00:22:50.148 13:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:50.148 13:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:50.148 ************************************ 00:22:50.148 END TEST raid_rebuild_test 00:22:50.148 ************************************ 00:22:50.149 13:22:30 bdev_raid -- bdev/bdev_raid.sh@958 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:22:50.149 13:22:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:22:50.149 13:22:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:50.149 13:22:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:50.149 ************************************ 00:22:50.149 START TEST raid_rebuild_test_sb 00:22:50.149 ************************************ 00:22:50.149 
13:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:22:50.149 
13:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=784584 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 784584 /var/tmp/spdk-raid.sock 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 784584 ']' 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:50.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.149 13:22:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:50.408 [2024-07-26 13:22:30.690242] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:22:50.408 [2024-07-26 13:22:30.690298] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784584 ] 00:22:50.408 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:50.408 Zero copy mechanism will not be used. 00:22:50.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.408 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:50.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.408 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:50.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.408 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:50.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.408 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:50.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.408 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:50.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.408 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:50.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.408 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:50.408 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.408 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:50.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.408 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:50.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.408 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:50.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.408 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:50.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.408 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:50.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.408 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:50.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.408 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:50.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.408 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:50.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.408 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:50.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.408 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:50.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.408 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:50.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.408 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:50.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.408 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:50.408 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.409 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:50.409 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.409 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:50.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.409 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:50.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.409 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:50.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.409 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:50.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.409 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:50.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.409 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:50.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.409 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:50.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.409 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:50.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.409 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:50.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.409 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:50.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:50.409 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:50.409 [2024-07-26 13:22:30.822302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.409 [2024-07-26 13:22:30.909540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.668 [2024-07-26 13:22:30.975396] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:50.668 [2024-07-26 13:22:30.975430] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:51.606 13:22:31 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:51.606 13:22:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:22:51.606 13:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:22:51.606 13:22:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:51.606 BaseBdev1_malloc 00:22:51.606 13:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:51.865 [2024-07-26 13:22:32.281922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:51.865 [2024-07-26 13:22:32.281962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:51.865 [2024-07-26 13:22:32.281983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x166f5f0 00:22:51.865 [2024-07-26 13:22:32.281994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:51.865 [2024-07-26 13:22:32.283479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:51.865 [2024-07-26 13:22:32.283506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:51.865 BaseBdev1 00:22:51.865 13:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:22:51.865 13:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:52.433 BaseBdev2_malloc 00:22:52.433 13:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:52.693 [2024-07-26 13:22:33.024545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:52.693 [2024-07-26 13:22:33.024586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:52.693 [2024-07-26 13:22:33.024603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1813130 00:22:52.693 [2024-07-26 13:22:33.024615] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:52.693 [2024-07-26 13:22:33.025966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:52.693 [2024-07-26 13:22:33.025994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:52.693 BaseBdev2 00:22:52.693 13:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:53.271 spare_malloc 00:22:53.271 13:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:53.530 spare_delay 00:22:53.790 13:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:53.790 [2024-07-26 13:22:34.284157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:53.790 [2024-07-26 13:22:34.284206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:53.790 [2024-07-26 13:22:34.284224] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x1812770 00:22:53.790 [2024-07-26 13:22:34.284236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:53.790 [2024-07-26 13:22:34.285657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:53.790 [2024-07-26 13:22:34.285682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:53.790 spare 00:22:53.790 13:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:22:54.049 [2024-07-26 13:22:34.512777] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:54.049 [2024-07-26 13:22:34.513945] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:54.049 [2024-07-26 13:22:34.514078] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1667270 00:22:54.049 [2024-07-26 13:22:34.514090] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:54.049 [2024-07-26 13:22:34.514285] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18133c0 00:22:54.049 [2024-07-26 13:22:34.514414] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1667270 00:22:54.049 [2024-07-26 13:22:34.514424] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1667270 00:22:54.049 [2024-07-26 13:22:34.514525] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:54.049 13:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:54.049 13:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:54.049 13:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:22:54.049 13:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:54.049 13:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:54.049 13:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:54.049 13:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:54.049 13:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:54.049 13:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:54.049 13:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:54.049 13:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.049 13:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.309 13:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:54.309 "name": "raid_bdev1", 00:22:54.309 "uuid": "88aa8dc9-b363-419f-b115-6e11dbafe421", 00:22:54.309 "strip_size_kb": 0, 00:22:54.309 "state": "online", 00:22:54.309 "raid_level": "raid1", 00:22:54.309 "superblock": true, 00:22:54.309 "num_base_bdevs": 2, 00:22:54.309 "num_base_bdevs_discovered": 2, 00:22:54.309 "num_base_bdevs_operational": 2, 00:22:54.309 "base_bdevs_list": [ 00:22:54.309 { 00:22:54.309 "name": "BaseBdev1", 00:22:54.309 "uuid": "30035183-e206-5c8d-a6f9-52035c11b7c8", 00:22:54.309 "is_configured": true, 00:22:54.309 "data_offset": 2048, 00:22:54.309 "data_size": 63488 00:22:54.309 }, 00:22:54.309 { 00:22:54.309 "name": "BaseBdev2", 00:22:54.309 "uuid": "3ccc27e2-6be9-5591-b99c-e9a2fd764d8f", 00:22:54.309 "is_configured": true, 00:22:54.309 "data_offset": 2048, 00:22:54.309 "data_size": 63488 
00:22:54.309 } 00:22:54.309 ] 00:22:54.309 }' 00:22:54.309 13:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:54.309 13:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:54.875 13:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:54.875 13:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:22:55.134 [2024-07-26 13:22:35.519619] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:55.134 13:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:22:55.134 13:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.134 13:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:55.393 13:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:22:55.393 13:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:22:55.393 13:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:22:55.393 13:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:22:55.393 13:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:55.393 13:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:55.393 13:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:55.393 13:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:55.393 
13:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:55.393 13:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:55.393 13:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:22:55.393 13:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:55.393 13:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:55.393 13:22:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:55.653 [2024-07-26 13:22:35.984668] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1807290 00:22:55.653 /dev/nbd0 00:22:55.653 13:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:55.653 13:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:55.653 13:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:55.653 13:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:22:55.653 13:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:55.653 13:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:55.653 13:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:22:55.653 13:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:22:55.653 13:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:55.653 13:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:55.653 13:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:55.653 1+0 records in 00:22:55.653 1+0 records out 00:22:55.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237923 s, 17.2 MB/s 00:22:55.653 13:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:55.653 13:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:22:55.653 13:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:22:55.653 13:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:55.653 13:22:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:22:55.653 13:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:55.653 13:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:55.653 13:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:22:55.653 13:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:22:55.653 13:22:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:22:59.862 63488+0 records in 00:22:59.862 63488+0 records out 00:22:59.862 32505856 bytes (33 MB, 31 MiB) copied, 4.02763 s, 8.1 MB/s 00:22:59.862 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:59.862 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:59.862 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:59.862 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:22:59.862 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:22:59.862 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:59.862 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:59.862 [2024-07-26 13:22:40.316804] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:59.862 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:59.862 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:59.862 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:59.862 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:59.862 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:59.862 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:59.862 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:59.862 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:59.862 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:00.122 [2024-07-26 13:22:40.541436] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:00.122 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:00.122 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:00.122 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:23:00.122 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:00.122 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:00.122 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:00.122 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:00.122 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:00.122 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:00.122 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:00.122 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.122 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.381 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:00.381 "name": "raid_bdev1", 00:23:00.381 "uuid": "88aa8dc9-b363-419f-b115-6e11dbafe421", 00:23:00.381 "strip_size_kb": 0, 00:23:00.381 "state": "online", 00:23:00.381 "raid_level": "raid1", 00:23:00.381 "superblock": true, 00:23:00.381 "num_base_bdevs": 2, 00:23:00.381 "num_base_bdevs_discovered": 1, 00:23:00.381 "num_base_bdevs_operational": 1, 00:23:00.381 "base_bdevs_list": [ 00:23:00.381 { 00:23:00.381 "name": null, 00:23:00.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.381 "is_configured": false, 00:23:00.381 "data_offset": 2048, 00:23:00.381 "data_size": 63488 00:23:00.381 }, 00:23:00.381 { 00:23:00.381 "name": "BaseBdev2", 00:23:00.381 "uuid": "3ccc27e2-6be9-5591-b99c-e9a2fd764d8f", 00:23:00.381 "is_configured": true, 00:23:00.381 "data_offset": 2048, 00:23:00.381 "data_size": 63488 
00:23:00.381 } 00:23:00.381 ] 00:23:00.381 }' 00:23:00.381 13:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:00.381 13:22:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:00.949 13:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:01.209 [2024-07-26 13:22:41.568144] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:01.209 [2024-07-26 13:22:41.572895] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x180a2b0 00:23:01.209 [2024-07-26 13:22:41.574947] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:01.209 13:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:02.147 13:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:02.147 13:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:02.147 13:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:02.147 13:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:02.147 13:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:02.147 13:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.147 13:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.406 13:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:02.406 "name": "raid_bdev1", 00:23:02.406 "uuid": 
"88aa8dc9-b363-419f-b115-6e11dbafe421", 00:23:02.406 "strip_size_kb": 0, 00:23:02.406 "state": "online", 00:23:02.406 "raid_level": "raid1", 00:23:02.406 "superblock": true, 00:23:02.406 "num_base_bdevs": 2, 00:23:02.406 "num_base_bdevs_discovered": 2, 00:23:02.406 "num_base_bdevs_operational": 2, 00:23:02.406 "process": { 00:23:02.406 "type": "rebuild", 00:23:02.406 "target": "spare", 00:23:02.406 "progress": { 00:23:02.406 "blocks": 24576, 00:23:02.406 "percent": 38 00:23:02.406 } 00:23:02.406 }, 00:23:02.406 "base_bdevs_list": [ 00:23:02.406 { 00:23:02.406 "name": "spare", 00:23:02.406 "uuid": "a76a13ac-6198-589e-bd27-dd765a06d225", 00:23:02.406 "is_configured": true, 00:23:02.406 "data_offset": 2048, 00:23:02.406 "data_size": 63488 00:23:02.406 }, 00:23:02.406 { 00:23:02.406 "name": "BaseBdev2", 00:23:02.406 "uuid": "3ccc27e2-6be9-5591-b99c-e9a2fd764d8f", 00:23:02.406 "is_configured": true, 00:23:02.406 "data_offset": 2048, 00:23:02.406 "data_size": 63488 00:23:02.406 } 00:23:02.406 ] 00:23:02.406 }' 00:23:02.406 13:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:02.406 13:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:02.406 13:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:02.406 13:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:02.406 13:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:02.665 [2024-07-26 13:22:43.113246] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:02.665 [2024-07-26 13:22:43.186702] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:02.665 [2024-07-26 13:22:43.186747] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:02.665 [2024-07-26 13:22:43.186762] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:02.665 [2024-07-26 13:22:43.186770] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:02.923 13:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:02.923 13:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:02.923 13:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:02.923 13:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:02.923 13:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:02.923 13:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:02.923 13:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:02.923 13:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:02.923 13:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:02.923 13:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:02.923 13:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.923 13:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.923 13:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:02.923 "name": "raid_bdev1", 00:23:02.923 "uuid": "88aa8dc9-b363-419f-b115-6e11dbafe421", 00:23:02.923 "strip_size_kb": 0, 00:23:02.923 "state": 
"online", 00:23:02.923 "raid_level": "raid1", 00:23:02.923 "superblock": true, 00:23:02.923 "num_base_bdevs": 2, 00:23:02.923 "num_base_bdevs_discovered": 1, 00:23:02.923 "num_base_bdevs_operational": 1, 00:23:02.923 "base_bdevs_list": [ 00:23:02.923 { 00:23:02.923 "name": null, 00:23:02.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.923 "is_configured": false, 00:23:02.923 "data_offset": 2048, 00:23:02.923 "data_size": 63488 00:23:02.923 }, 00:23:02.923 { 00:23:02.923 "name": "BaseBdev2", 00:23:02.923 "uuid": "3ccc27e2-6be9-5591-b99c-e9a2fd764d8f", 00:23:02.923 "is_configured": true, 00:23:02.923 "data_offset": 2048, 00:23:02.923 "data_size": 63488 00:23:02.923 } 00:23:02.923 ] 00:23:02.923 }' 00:23:02.923 13:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:02.923 13:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:03.490 13:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:03.490 13:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:03.490 13:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:03.490 13:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:03.490 13:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:03.490 13:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.490 13:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.750 13:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:03.750 "name": "raid_bdev1", 00:23:03.750 "uuid": "88aa8dc9-b363-419f-b115-6e11dbafe421", 
00:23:03.750 "strip_size_kb": 0, 00:23:03.750 "state": "online", 00:23:03.750 "raid_level": "raid1", 00:23:03.750 "superblock": true, 00:23:03.750 "num_base_bdevs": 2, 00:23:03.750 "num_base_bdevs_discovered": 1, 00:23:03.750 "num_base_bdevs_operational": 1, 00:23:03.750 "base_bdevs_list": [ 00:23:03.750 { 00:23:03.750 "name": null, 00:23:03.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.750 "is_configured": false, 00:23:03.750 "data_offset": 2048, 00:23:03.750 "data_size": 63488 00:23:03.750 }, 00:23:03.750 { 00:23:03.750 "name": "BaseBdev2", 00:23:03.750 "uuid": "3ccc27e2-6be9-5591-b99c-e9a2fd764d8f", 00:23:03.750 "is_configured": true, 00:23:03.750 "data_offset": 2048, 00:23:03.750 "data_size": 63488 00:23:03.750 } 00:23:03.750 ] 00:23:03.750 }' 00:23:03.750 13:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:03.750 13:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:03.750 13:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:04.008 13:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:04.008 13:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:04.008 [2024-07-26 13:22:44.494295] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:04.008 [2024-07-26 13:22:44.499086] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1807290 00:23:04.008 [2024-07-26 13:22:44.500452] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:04.008 13:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:23:05.386 13:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:05.386 13:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:05.386 13:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:05.386 13:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:05.386 13:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:05.386 13:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.386 13:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.644 13:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:05.644 "name": "raid_bdev1", 00:23:05.644 "uuid": "88aa8dc9-b363-419f-b115-6e11dbafe421", 00:23:05.644 "strip_size_kb": 0, 00:23:05.644 "state": "online", 00:23:05.644 "raid_level": "raid1", 00:23:05.644 "superblock": true, 00:23:05.644 "num_base_bdevs": 2, 00:23:05.644 "num_base_bdevs_discovered": 2, 00:23:05.644 "num_base_bdevs_operational": 2, 00:23:05.644 "process": { 00:23:05.644 "type": "rebuild", 00:23:05.644 "target": "spare", 00:23:05.644 "progress": { 00:23:05.645 "blocks": 28672, 00:23:05.645 "percent": 45 00:23:05.645 } 00:23:05.645 }, 00:23:05.645 "base_bdevs_list": [ 00:23:05.645 { 00:23:05.645 "name": "spare", 00:23:05.645 "uuid": "a76a13ac-6198-589e-bd27-dd765a06d225", 00:23:05.645 "is_configured": true, 00:23:05.645 "data_offset": 2048, 00:23:05.645 "data_size": 63488 00:23:05.645 }, 00:23:05.645 { 00:23:05.645 "name": "BaseBdev2", 00:23:05.645 "uuid": "3ccc27e2-6be9-5591-b99c-e9a2fd764d8f", 00:23:05.645 "is_configured": true, 00:23:05.645 "data_offset": 2048, 00:23:05.645 "data_size": 63488 00:23:05.645 } 00:23:05.645 ] 00:23:05.645 }' 00:23:05.645 
13:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:05.645 13:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:05.645 13:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:05.645 13:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:05.645 13:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:23:05.645 13:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:23:05.645 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:23:05.645 13:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:23:05.645 13:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:23:05.645 13:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:23:05.645 13:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=752 00:23:05.645 13:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:05.645 13:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:05.645 13:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:05.645 13:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:05.645 13:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:05.645 13:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:05.645 13:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.645 13:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.904 13:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:05.904 "name": "raid_bdev1", 00:23:05.904 "uuid": "88aa8dc9-b363-419f-b115-6e11dbafe421", 00:23:05.904 "strip_size_kb": 0, 00:23:05.904 "state": "online", 00:23:05.904 "raid_level": "raid1", 00:23:05.904 "superblock": true, 00:23:05.904 "num_base_bdevs": 2, 00:23:05.904 "num_base_bdevs_discovered": 2, 00:23:05.904 "num_base_bdevs_operational": 2, 00:23:05.904 "process": { 00:23:05.904 "type": "rebuild", 00:23:05.904 "target": "spare", 00:23:05.904 "progress": { 00:23:05.904 "blocks": 36864, 00:23:05.904 "percent": 58 00:23:05.904 } 00:23:05.904 }, 00:23:05.904 "base_bdevs_list": [ 00:23:05.904 { 00:23:05.904 "name": "spare", 00:23:05.904 "uuid": "a76a13ac-6198-589e-bd27-dd765a06d225", 00:23:05.904 "is_configured": true, 00:23:05.904 "data_offset": 2048, 00:23:05.904 "data_size": 63488 00:23:05.904 }, 00:23:05.904 { 00:23:05.904 "name": "BaseBdev2", 00:23:05.904 "uuid": "3ccc27e2-6be9-5591-b99c-e9a2fd764d8f", 00:23:05.904 "is_configured": true, 00:23:05.904 "data_offset": 2048, 00:23:05.904 "data_size": 63488 00:23:05.904 } 00:23:05.904 ] 00:23:05.904 }' 00:23:05.904 13:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:05.904 13:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:05.904 13:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:06.162 13:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:06.162 13:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:23:07.101 13:22:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:07.101 13:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:07.101 13:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:07.101 13:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:07.101 13:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:07.101 13:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:07.101 13:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.101 13:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.101 [2024-07-26 13:22:47.622961] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:07.101 [2024-07-26 13:22:47.623015] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:07.101 [2024-07-26 13:22:47.623094] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:07.360 13:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:07.360 "name": "raid_bdev1", 00:23:07.360 "uuid": "88aa8dc9-b363-419f-b115-6e11dbafe421", 00:23:07.360 "strip_size_kb": 0, 00:23:07.360 "state": "online", 00:23:07.360 "raid_level": "raid1", 00:23:07.360 "superblock": true, 00:23:07.360 "num_base_bdevs": 2, 00:23:07.360 "num_base_bdevs_discovered": 2, 00:23:07.360 "num_base_bdevs_operational": 2, 00:23:07.360 "base_bdevs_list": [ 00:23:07.360 { 00:23:07.360 "name": "spare", 00:23:07.360 "uuid": "a76a13ac-6198-589e-bd27-dd765a06d225", 00:23:07.360 "is_configured": true, 00:23:07.360 
"data_offset": 2048, 00:23:07.360 "data_size": 63488 00:23:07.360 }, 00:23:07.360 { 00:23:07.360 "name": "BaseBdev2", 00:23:07.360 "uuid": "3ccc27e2-6be9-5591-b99c-e9a2fd764d8f", 00:23:07.360 "is_configured": true, 00:23:07.360 "data_offset": 2048, 00:23:07.360 "data_size": 63488 00:23:07.360 } 00:23:07.360 ] 00:23:07.360 }' 00:23:07.360 13:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:07.360 13:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:07.360 13:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:07.360 13:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:23:07.360 13:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:23:07.360 13:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:07.360 13:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:07.360 13:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:07.360 13:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:07.360 13:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:07.360 13:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.360 13:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.619 13:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:07.619 "name": "raid_bdev1", 00:23:07.619 "uuid": "88aa8dc9-b363-419f-b115-6e11dbafe421", 00:23:07.619 "strip_size_kb": 0, 00:23:07.619 "state": 
"online", 00:23:07.619 "raid_level": "raid1", 00:23:07.619 "superblock": true, 00:23:07.619 "num_base_bdevs": 2, 00:23:07.619 "num_base_bdevs_discovered": 2, 00:23:07.620 "num_base_bdevs_operational": 2, 00:23:07.620 "base_bdevs_list": [ 00:23:07.620 { 00:23:07.620 "name": "spare", 00:23:07.620 "uuid": "a76a13ac-6198-589e-bd27-dd765a06d225", 00:23:07.620 "is_configured": true, 00:23:07.620 "data_offset": 2048, 00:23:07.620 "data_size": 63488 00:23:07.620 }, 00:23:07.620 { 00:23:07.620 "name": "BaseBdev2", 00:23:07.620 "uuid": "3ccc27e2-6be9-5591-b99c-e9a2fd764d8f", 00:23:07.620 "is_configured": true, 00:23:07.620 "data_offset": 2048, 00:23:07.620 "data_size": 63488 00:23:07.620 } 00:23:07.620 ] 00:23:07.620 }' 00:23:07.620 13:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:07.620 13:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:07.620 13:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:07.620 13:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:07.620 13:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:07.620 13:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:07.620 13:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:07.620 13:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:07.620 13:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:07.620 13:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:07.620 13:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:07.620 13:22:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:07.620 13:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:07.620 13:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:07.620 13:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.620 13:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.879 13:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:07.879 "name": "raid_bdev1", 00:23:07.879 "uuid": "88aa8dc9-b363-419f-b115-6e11dbafe421", 00:23:07.879 "strip_size_kb": 0, 00:23:07.879 "state": "online", 00:23:07.879 "raid_level": "raid1", 00:23:07.879 "superblock": true, 00:23:07.879 "num_base_bdevs": 2, 00:23:07.879 "num_base_bdevs_discovered": 2, 00:23:07.879 "num_base_bdevs_operational": 2, 00:23:07.879 "base_bdevs_list": [ 00:23:07.879 { 00:23:07.879 "name": "spare", 00:23:07.879 "uuid": "a76a13ac-6198-589e-bd27-dd765a06d225", 00:23:07.879 "is_configured": true, 00:23:07.879 "data_offset": 2048, 00:23:07.879 "data_size": 63488 00:23:07.879 }, 00:23:07.879 { 00:23:07.879 "name": "BaseBdev2", 00:23:07.879 "uuid": "3ccc27e2-6be9-5591-b99c-e9a2fd764d8f", 00:23:07.879 "is_configured": true, 00:23:07.879 "data_offset": 2048, 00:23:07.879 "data_size": 63488 00:23:07.879 } 00:23:07.879 ] 00:23:07.879 }' 00:23:07.879 13:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:07.879 13:22:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.447 13:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:08.706 
[2024-07-26 13:22:49.087049] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:08.707 [2024-07-26 13:22:49.087074] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:08.707 [2024-07-26 13:22:49.087130] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:08.707 [2024-07-26 13:22:49.087196] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:08.707 [2024-07-26 13:22:49.087208] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1667270 name raid_bdev1, state offline 00:23:08.707 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.707 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 00:23:08.966 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:23:08.966 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:23:08.966 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:23:08.966 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:08.966 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:08.966 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:08.966 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:08.966 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:08.966 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:08.966 13:22:49 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:23:08.966 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:08.966 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:08.966 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:09.226 /dev/nbd0 00:23:09.226 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:09.226 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:09.226 13:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:09.226 13:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:23:09.226 13:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:09.226 13:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:09.226 13:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:09.226 13:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:23:09.226 13:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:09.226 13:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:09.226 13:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:09.226 1+0 records in 00:23:09.226 1+0 records out 00:23:09.226 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240015 s, 17.1 MB/s 00:23:09.226 13:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:09.226 13:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:23:09.226 13:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:09.226 13:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:09.226 13:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:23:09.226 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:09.226 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:09.226 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:09.486 /dev/nbd1 00:23:09.486 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:09.486 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:09.486 13:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:23:09.486 13:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:23:09.486 13:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:09.486 13:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:09.486 13:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:23:09.486 13:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:23:09.486 13:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:09.486 13:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 
20 )) 00:23:09.486 13:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:09.486 1+0 records in 00:23:09.486 1+0 records out 00:23:09.486 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240255 s, 17.0 MB/s 00:23:09.486 13:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:09.486 13:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:23:09.486 13:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:09.486 13:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:09.486 13:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:23:09.486 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:09.486 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:09.486 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:09.486 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:09.486 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:09.486 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:09.486 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:09.486 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:09.486 13:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:09.486 13:22:49 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:09.746 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:09.746 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:09.746 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:09.746 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:09.746 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:09.746 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:09.746 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:09.746 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:09.746 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:09.746 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:10.005 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:10.005 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:10.005 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:10.005 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:10.005 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:10.005 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:10.005 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:10.005 
13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:10.005 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:23:10.005 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:10.265 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:10.525 [2024-07-26 13:22:50.895653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:10.525 [2024-07-26 13:22:50.895697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:10.525 [2024-07-26 13:22:50.895716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16680a0 00:23:10.525 [2024-07-26 13:22:50.895728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:10.525 [2024-07-26 13:22:50.897239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:10.525 [2024-07-26 13:22:50.897266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:10.525 [2024-07-26 13:22:50.897337] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:10.525 [2024-07-26 13:22:50.897362] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:10.525 [2024-07-26 13:22:50.897457] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:10.525 spare 00:23:10.525 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:10.525 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:10.525 13:22:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:10.525 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:10.525 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:10.525 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:10.525 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:10.525 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:10.525 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:10.525 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:10.525 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.525 13:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.525 [2024-07-26 13:22:50.997769] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x166c650 00:23:10.525 [2024-07-26 13:22:50.997787] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:10.525 [2024-07-26 13:22:50.997958] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1807290 00:23:10.525 [2024-07-26 13:22:50.998099] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x166c650 00:23:10.525 [2024-07-26 13:22:50.998109] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x166c650 00:23:10.525 [2024-07-26 13:22:50.998214] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:10.785 13:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:23:10.785 "name": "raid_bdev1", 00:23:10.785 "uuid": "88aa8dc9-b363-419f-b115-6e11dbafe421", 00:23:10.785 "strip_size_kb": 0, 00:23:10.785 "state": "online", 00:23:10.785 "raid_level": "raid1", 00:23:10.785 "superblock": true, 00:23:10.785 "num_base_bdevs": 2, 00:23:10.785 "num_base_bdevs_discovered": 2, 00:23:10.785 "num_base_bdevs_operational": 2, 00:23:10.785 "base_bdevs_list": [ 00:23:10.785 { 00:23:10.785 "name": "spare", 00:23:10.785 "uuid": "a76a13ac-6198-589e-bd27-dd765a06d225", 00:23:10.785 "is_configured": true, 00:23:10.785 "data_offset": 2048, 00:23:10.785 "data_size": 63488 00:23:10.785 }, 00:23:10.785 { 00:23:10.785 "name": "BaseBdev2", 00:23:10.785 "uuid": "3ccc27e2-6be9-5591-b99c-e9a2fd764d8f", 00:23:10.785 "is_configured": true, 00:23:10.785 "data_offset": 2048, 00:23:10.785 "data_size": 63488 00:23:10.785 } 00:23:10.785 ] 00:23:10.785 }' 00:23:10.785 13:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:10.785 13:22:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:11.353 13:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:11.353 13:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:11.353 13:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:11.354 13:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:11.354 13:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:11.354 13:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.354 13:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.612 13:22:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:11.612 "name": "raid_bdev1", 00:23:11.613 "uuid": "88aa8dc9-b363-419f-b115-6e11dbafe421", 00:23:11.613 "strip_size_kb": 0, 00:23:11.613 "state": "online", 00:23:11.613 "raid_level": "raid1", 00:23:11.613 "superblock": true, 00:23:11.613 "num_base_bdevs": 2, 00:23:11.613 "num_base_bdevs_discovered": 2, 00:23:11.613 "num_base_bdevs_operational": 2, 00:23:11.613 "base_bdevs_list": [ 00:23:11.613 { 00:23:11.613 "name": "spare", 00:23:11.613 "uuid": "a76a13ac-6198-589e-bd27-dd765a06d225", 00:23:11.613 "is_configured": true, 00:23:11.613 "data_offset": 2048, 00:23:11.613 "data_size": 63488 00:23:11.613 }, 00:23:11.613 { 00:23:11.613 "name": "BaseBdev2", 00:23:11.613 "uuid": "3ccc27e2-6be9-5591-b99c-e9a2fd764d8f", 00:23:11.613 "is_configured": true, 00:23:11.613 "data_offset": 2048, 00:23:11.613 "data_size": 63488 00:23:11.613 } 00:23:11.613 ] 00:23:11.613 }' 00:23:11.613 13:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:11.613 13:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:11.613 13:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:11.613 13:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:11.613 13:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:11.613 13:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.872 13:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:23:11.872 13:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_remove_base_bdev spare 00:23:12.129 [2024-07-26 13:22:52.431887] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:12.129 13:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:12.129 13:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:12.129 13:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:12.129 13:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:12.129 13:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:12.129 13:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:12.129 13:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:12.129 13:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:12.129 13:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:12.129 13:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:12.129 13:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.129 13:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:12.387 13:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:12.387 "name": "raid_bdev1", 00:23:12.387 "uuid": "88aa8dc9-b363-419f-b115-6e11dbafe421", 00:23:12.387 "strip_size_kb": 0, 00:23:12.387 "state": "online", 00:23:12.387 "raid_level": "raid1", 00:23:12.387 "superblock": true, 00:23:12.387 "num_base_bdevs": 2, 00:23:12.387 "num_base_bdevs_discovered": 1, 00:23:12.387 
"num_base_bdevs_operational": 1, 00:23:12.387 "base_bdevs_list": [ 00:23:12.387 { 00:23:12.387 "name": null, 00:23:12.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:12.387 "is_configured": false, 00:23:12.387 "data_offset": 2048, 00:23:12.387 "data_size": 63488 00:23:12.387 }, 00:23:12.387 { 00:23:12.387 "name": "BaseBdev2", 00:23:12.387 "uuid": "3ccc27e2-6be9-5591-b99c-e9a2fd764d8f", 00:23:12.387 "is_configured": true, 00:23:12.387 "data_offset": 2048, 00:23:12.387 "data_size": 63488 00:23:12.387 } 00:23:12.387 ] 00:23:12.387 }' 00:23:12.387 13:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:12.387 13:22:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:13.013 13:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:13.013 [2024-07-26 13:22:53.494691] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:13.013 [2024-07-26 13:22:53.494825] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:13.013 [2024-07-26 13:22:53.494843] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:13.013 [2024-07-26 13:22:53.494869] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:13.013 [2024-07-26 13:22:53.499549] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1668f10 00:23:13.013 [2024-07-26 13:22:53.501659] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:13.013 13:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:23:14.391 13:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:14.391 13:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:14.391 13:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:14.391 13:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:14.391 13:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:14.391 13:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.391 13:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.391 13:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:14.391 "name": "raid_bdev1", 00:23:14.391 "uuid": "88aa8dc9-b363-419f-b115-6e11dbafe421", 00:23:14.391 "strip_size_kb": 0, 00:23:14.391 "state": "online", 00:23:14.391 "raid_level": "raid1", 00:23:14.391 "superblock": true, 00:23:14.391 "num_base_bdevs": 2, 00:23:14.391 "num_base_bdevs_discovered": 2, 00:23:14.391 "num_base_bdevs_operational": 2, 00:23:14.391 "process": { 00:23:14.391 "type": "rebuild", 00:23:14.391 "target": "spare", 00:23:14.391 "progress": { 00:23:14.391 "blocks": 24576, 00:23:14.391 "percent": 38 
00:23:14.391 } 00:23:14.391 }, 00:23:14.391 "base_bdevs_list": [ 00:23:14.391 { 00:23:14.391 "name": "spare", 00:23:14.391 "uuid": "a76a13ac-6198-589e-bd27-dd765a06d225", 00:23:14.391 "is_configured": true, 00:23:14.391 "data_offset": 2048, 00:23:14.391 "data_size": 63488 00:23:14.391 }, 00:23:14.391 { 00:23:14.391 "name": "BaseBdev2", 00:23:14.391 "uuid": "3ccc27e2-6be9-5591-b99c-e9a2fd764d8f", 00:23:14.391 "is_configured": true, 00:23:14.391 "data_offset": 2048, 00:23:14.391 "data_size": 63488 00:23:14.391 } 00:23:14.391 ] 00:23:14.391 }' 00:23:14.391 13:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:14.391 13:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:14.391 13:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:14.391 13:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:14.391 13:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:14.650 [2024-07-26 13:22:55.051885] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:14.650 [2024-07-26 13:22:55.113250] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:14.650 [2024-07-26 13:22:55.113292] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:14.650 [2024-07-26 13:22:55.113306] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:14.650 [2024-07-26 13:22:55.113314] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:14.650 13:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:14.650 13:22:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:14.650 13:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:14.650 13:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:14.650 13:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:14.650 13:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:14.650 13:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:14.650 13:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:14.650 13:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:14.650 13:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:14.650 13:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.650 13:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.909 13:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:14.909 "name": "raid_bdev1", 00:23:14.909 "uuid": "88aa8dc9-b363-419f-b115-6e11dbafe421", 00:23:14.909 "strip_size_kb": 0, 00:23:14.909 "state": "online", 00:23:14.909 "raid_level": "raid1", 00:23:14.909 "superblock": true, 00:23:14.909 "num_base_bdevs": 2, 00:23:14.909 "num_base_bdevs_discovered": 1, 00:23:14.909 "num_base_bdevs_operational": 1, 00:23:14.909 "base_bdevs_list": [ 00:23:14.909 { 00:23:14.909 "name": null, 00:23:14.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.909 "is_configured": false, 00:23:14.909 "data_offset": 2048, 00:23:14.909 "data_size": 63488 00:23:14.909 }, 00:23:14.909 { 
00:23:14.909 "name": "BaseBdev2", 00:23:14.909 "uuid": "3ccc27e2-6be9-5591-b99c-e9a2fd764d8f", 00:23:14.909 "is_configured": true, 00:23:14.909 "data_offset": 2048, 00:23:14.909 "data_size": 63488 00:23:14.909 } 00:23:14.909 ] 00:23:14.909 }' 00:23:14.909 13:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:14.909 13:22:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:15.477 13:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:15.736 [2024-07-26 13:22:56.139928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:15.736 [2024-07-26 13:22:56.139971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:15.736 [2024-07-26 13:22:56.139990] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x166c9e0 00:23:15.736 [2024-07-26 13:22:56.140001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:15.736 [2024-07-26 13:22:56.140338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:15.736 [2024-07-26 13:22:56.140355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:15.736 [2024-07-26 13:22:56.140431] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:15.736 [2024-07-26 13:22:56.140443] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:15.736 [2024-07-26 13:22:56.140454] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:15.736 [2024-07-26 13:22:56.140471] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:15.736 [2024-07-26 13:22:56.145100] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1807290 00:23:15.736 spare 00:23:15.736 [2024-07-26 13:22:56.146450] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:15.736 13:22:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:23:16.673 13:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:16.673 13:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:16.673 13:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:16.673 13:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:16.673 13:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:16.673 13:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.673 13:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.931 13:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:16.931 "name": "raid_bdev1", 00:23:16.931 "uuid": "88aa8dc9-b363-419f-b115-6e11dbafe421", 00:23:16.931 "strip_size_kb": 0, 00:23:16.931 "state": "online", 00:23:16.931 "raid_level": "raid1", 00:23:16.931 "superblock": true, 00:23:16.931 "num_base_bdevs": 2, 00:23:16.931 "num_base_bdevs_discovered": 2, 00:23:16.931 "num_base_bdevs_operational": 2, 00:23:16.931 "process": { 00:23:16.931 "type": "rebuild", 00:23:16.931 "target": "spare", 00:23:16.931 "progress": { 00:23:16.931 "blocks": 24576, 00:23:16.931 
"percent": 38 00:23:16.931 } 00:23:16.931 }, 00:23:16.931 "base_bdevs_list": [ 00:23:16.931 { 00:23:16.931 "name": "spare", 00:23:16.931 "uuid": "a76a13ac-6198-589e-bd27-dd765a06d225", 00:23:16.931 "is_configured": true, 00:23:16.931 "data_offset": 2048, 00:23:16.931 "data_size": 63488 00:23:16.931 }, 00:23:16.931 { 00:23:16.931 "name": "BaseBdev2", 00:23:16.931 "uuid": "3ccc27e2-6be9-5591-b99c-e9a2fd764d8f", 00:23:16.931 "is_configured": true, 00:23:16.931 "data_offset": 2048, 00:23:16.931 "data_size": 63488 00:23:16.931 } 00:23:16.931 ] 00:23:16.931 }' 00:23:16.931 13:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:16.931 13:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:16.931 13:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:17.190 13:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:17.190 13:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:17.190 [2024-07-26 13:22:57.681884] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:17.449 [2024-07-26 13:22:57.758052] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:17.449 [2024-07-26 13:22:57.758095] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:17.449 [2024-07-26 13:22:57.758110] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:17.449 [2024-07-26 13:22:57.758118] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:17.449 13:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:17.449 
13:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:17.449 13:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:17.449 13:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:17.449 13:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:17.449 13:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:17.449 13:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:17.449 13:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:17.449 13:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:17.449 13:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:17.449 13:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.449 13:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.708 13:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:17.708 "name": "raid_bdev1", 00:23:17.708 "uuid": "88aa8dc9-b363-419f-b115-6e11dbafe421", 00:23:17.708 "strip_size_kb": 0, 00:23:17.708 "state": "online", 00:23:17.708 "raid_level": "raid1", 00:23:17.708 "superblock": true, 00:23:17.708 "num_base_bdevs": 2, 00:23:17.708 "num_base_bdevs_discovered": 1, 00:23:17.708 "num_base_bdevs_operational": 1, 00:23:17.708 "base_bdevs_list": [ 00:23:17.708 { 00:23:17.708 "name": null, 00:23:17.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.708 "is_configured": false, 00:23:17.708 "data_offset": 2048, 00:23:17.708 "data_size": 63488 00:23:17.708 }, 
00:23:17.708 { 00:23:17.708 "name": "BaseBdev2", 00:23:17.708 "uuid": "3ccc27e2-6be9-5591-b99c-e9a2fd764d8f", 00:23:17.708 "is_configured": true, 00:23:17.708 "data_offset": 2048, 00:23:17.708 "data_size": 63488 00:23:17.708 } 00:23:17.708 ] 00:23:17.708 }' 00:23:17.708 13:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:17.708 13:22:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.276 13:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:18.276 13:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:18.276 13:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:18.276 13:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:18.276 13:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:18.276 13:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.276 13:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.535 13:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:18.535 "name": "raid_bdev1", 00:23:18.535 "uuid": "88aa8dc9-b363-419f-b115-6e11dbafe421", 00:23:18.535 "strip_size_kb": 0, 00:23:18.535 "state": "online", 00:23:18.535 "raid_level": "raid1", 00:23:18.535 "superblock": true, 00:23:18.535 "num_base_bdevs": 2, 00:23:18.535 "num_base_bdevs_discovered": 1, 00:23:18.535 "num_base_bdevs_operational": 1, 00:23:18.535 "base_bdevs_list": [ 00:23:18.535 { 00:23:18.535 "name": null, 00:23:18.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.535 "is_configured": false, 00:23:18.535 "data_offset": 2048, 
00:23:18.535 "data_size": 63488 00:23:18.535 }, 00:23:18.535 { 00:23:18.535 "name": "BaseBdev2", 00:23:18.535 "uuid": "3ccc27e2-6be9-5591-b99c-e9a2fd764d8f", 00:23:18.535 "is_configured": true, 00:23:18.535 "data_offset": 2048, 00:23:18.535 "data_size": 63488 00:23:18.535 } 00:23:18.535 ] 00:23:18.535 }' 00:23:18.535 13:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:18.535 13:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:18.535 13:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:18.535 13:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:18.535 13:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:18.794 13:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:19.053 [2024-07-26 13:22:59.346330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:19.053 [2024-07-26 13:22:59.346371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.053 [2024-07-26 13:22:59.346388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16682d0 00:23:19.053 [2024-07-26 13:22:59.346400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.053 [2024-07-26 13:22:59.346706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.053 [2024-07-26 13:22:59.346722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:19.053 [2024-07-26 13:22:59.346777] bdev_raid.c:3875:raid_bdev_examine_cont: 
*DEBUG*: raid superblock found on bdev BaseBdev1 00:23:19.053 [2024-07-26 13:22:59.346788] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:19.053 [2024-07-26 13:22:59.346798] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:19.053 BaseBdev1 00:23:19.053 13:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:23:19.989 13:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:19.989 13:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:19.989 13:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:19.989 13:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:19.989 13:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:19.989 13:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:19.989 13:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:19.989 13:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:19.990 13:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:19.990 13:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:19.990 13:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.990 13:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.248 13:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:23:20.248 "name": "raid_bdev1", 00:23:20.249 "uuid": "88aa8dc9-b363-419f-b115-6e11dbafe421", 00:23:20.249 "strip_size_kb": 0, 00:23:20.249 "state": "online", 00:23:20.249 "raid_level": "raid1", 00:23:20.249 "superblock": true, 00:23:20.249 "num_base_bdevs": 2, 00:23:20.249 "num_base_bdevs_discovered": 1, 00:23:20.249 "num_base_bdevs_operational": 1, 00:23:20.249 "base_bdevs_list": [ 00:23:20.249 { 00:23:20.249 "name": null, 00:23:20.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.249 "is_configured": false, 00:23:20.249 "data_offset": 2048, 00:23:20.249 "data_size": 63488 00:23:20.249 }, 00:23:20.249 { 00:23:20.249 "name": "BaseBdev2", 00:23:20.249 "uuid": "3ccc27e2-6be9-5591-b99c-e9a2fd764d8f", 00:23:20.249 "is_configured": true, 00:23:20.249 "data_offset": 2048, 00:23:20.249 "data_size": 63488 00:23:20.249 } 00:23:20.249 ] 00:23:20.249 }' 00:23:20.249 13:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:20.249 13:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.816 13:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:20.816 13:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:20.816 13:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:20.816 13:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:20.816 13:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:20.816 13:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.816 13:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.075 13:23:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:21.075 "name": "raid_bdev1", 00:23:21.075 "uuid": "88aa8dc9-b363-419f-b115-6e11dbafe421", 00:23:21.075 "strip_size_kb": 0, 00:23:21.075 "state": "online", 00:23:21.075 "raid_level": "raid1", 00:23:21.075 "superblock": true, 00:23:21.075 "num_base_bdevs": 2, 00:23:21.075 "num_base_bdevs_discovered": 1, 00:23:21.075 "num_base_bdevs_operational": 1, 00:23:21.075 "base_bdevs_list": [ 00:23:21.075 { 00:23:21.075 "name": null, 00:23:21.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.075 "is_configured": false, 00:23:21.075 "data_offset": 2048, 00:23:21.075 "data_size": 63488 00:23:21.075 }, 00:23:21.075 { 00:23:21.075 "name": "BaseBdev2", 00:23:21.075 "uuid": "3ccc27e2-6be9-5591-b99c-e9a2fd764d8f", 00:23:21.075 "is_configured": true, 00:23:21.075 "data_offset": 2048, 00:23:21.075 "data_size": 63488 00:23:21.075 } 00:23:21.075 ] 00:23:21.075 }' 00:23:21.075 13:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:21.075 13:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:21.075 13:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:21.075 13:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:21.075 13:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:21.075 13:23:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:23:21.075 13:23:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:21.075 13:23:01 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:21.075 13:23:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:21.075 13:23:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:21.075 13:23:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:21.075 13:23:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:21.075 13:23:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:21.075 13:23:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:21.075 13:23:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:23:21.075 13:23:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:21.334 [2024-07-26 13:23:01.676474] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:21.334 [2024-07-26 13:23:01.676582] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:21.334 [2024-07-26 13:23:01.676596] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:21.334 request: 00:23:21.334 { 00:23:21.334 "base_bdev": "BaseBdev1", 00:23:21.334 "raid_bdev": "raid_bdev1", 00:23:21.334 "method": "bdev_raid_add_base_bdev", 00:23:21.334 
"req_id": 1 00:23:21.334 } 00:23:21.334 Got JSON-RPC error response 00:23:21.334 response: 00:23:21.334 { 00:23:21.334 "code": -22, 00:23:21.334 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:21.334 } 00:23:21.334 13:23:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:23:21.334 13:23:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:21.334 13:23:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:21.334 13:23:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:21.334 13:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:23:22.270 13:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:22.270 13:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:22.270 13:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:22.270 13:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:22.270 13:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:22.270 13:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:22.270 13:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:22.270 13:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:22.270 13:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:22.270 13:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:22.270 13:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.270 13:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.529 13:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:22.529 "name": "raid_bdev1", 00:23:22.529 "uuid": "88aa8dc9-b363-419f-b115-6e11dbafe421", 00:23:22.529 "strip_size_kb": 0, 00:23:22.529 "state": "online", 00:23:22.529 "raid_level": "raid1", 00:23:22.529 "superblock": true, 00:23:22.529 "num_base_bdevs": 2, 00:23:22.529 "num_base_bdevs_discovered": 1, 00:23:22.529 "num_base_bdevs_operational": 1, 00:23:22.529 "base_bdevs_list": [ 00:23:22.529 { 00:23:22.529 "name": null, 00:23:22.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.529 "is_configured": false, 00:23:22.529 "data_offset": 2048, 00:23:22.529 "data_size": 63488 00:23:22.529 }, 00:23:22.529 { 00:23:22.529 "name": "BaseBdev2", 00:23:22.529 "uuid": "3ccc27e2-6be9-5591-b99c-e9a2fd764d8f", 00:23:22.529 "is_configured": true, 00:23:22.529 "data_offset": 2048, 00:23:22.529 "data_size": 63488 00:23:22.529 } 00:23:22.529 ] 00:23:22.529 }' 00:23:22.529 13:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:22.529 13:23:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.095 13:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:23.095 13:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:23.095 13:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:23.095 13:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:23.095 13:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:23.095 13:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:23:23.095 13:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.354 13:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:23.354 "name": "raid_bdev1", 00:23:23.354 "uuid": "88aa8dc9-b363-419f-b115-6e11dbafe421", 00:23:23.354 "strip_size_kb": 0, 00:23:23.354 "state": "online", 00:23:23.354 "raid_level": "raid1", 00:23:23.354 "superblock": true, 00:23:23.354 "num_base_bdevs": 2, 00:23:23.354 "num_base_bdevs_discovered": 1, 00:23:23.354 "num_base_bdevs_operational": 1, 00:23:23.354 "base_bdevs_list": [ 00:23:23.354 { 00:23:23.354 "name": null, 00:23:23.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.354 "is_configured": false, 00:23:23.354 "data_offset": 2048, 00:23:23.354 "data_size": 63488 00:23:23.354 }, 00:23:23.354 { 00:23:23.354 "name": "BaseBdev2", 00:23:23.354 "uuid": "3ccc27e2-6be9-5591-b99c-e9a2fd764d8f", 00:23:23.354 "is_configured": true, 00:23:23.354 "data_offset": 2048, 00:23:23.354 "data_size": 63488 00:23:23.354 } 00:23:23.354 ] 00:23:23.354 }' 00:23:23.354 13:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:23.354 13:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:23.354 13:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:23.354 13:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:23.354 13:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@798 -- # killprocess 784584 00:23:23.354 13:23:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 784584 ']' 00:23:23.354 13:23:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 784584 00:23:23.354 13:23:03 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:23:23.354 13:23:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:23.354 13:23:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 784584 00:23:23.354 13:23:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:23.354 13:23:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:23.354 13:23:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 784584' 00:23:23.354 killing process with pid 784584 00:23:23.354 13:23:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 784584 00:23:23.354 Received shutdown signal, test time was about 60.000000 seconds 00:23:23.354 00:23:23.354 Latency(us) 00:23:23.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.354 =================================================================================================================== 00:23:23.354 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:23.354 [2024-07-26 13:23:03.877034] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:23.354 [2024-07-26 13:23:03.877111] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:23.354 [2024-07-26 13:23:03.877157] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:23.354 [2024-07-26 13:23:03.877169] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x166c650 name raid_bdev1, state offline 00:23:23.354 13:23:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 784584 00:23:23.612 [2024-07-26 13:23:03.900427] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:23.612 13:23:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:23:23.612 00:23:23.612 real 0m33.460s 00:23:23.612 user 0m49.883s 00:23:23.612 sys 0m6.156s 00:23:23.612 13:23:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:23.612 13:23:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.612 ************************************ 00:23:23.612 END TEST raid_rebuild_test_sb 00:23:23.612 ************************************ 00:23:23.612 13:23:04 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:23:23.612 13:23:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:23:23.612 13:23:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:23.612 13:23:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:23.871 ************************************ 00:23:23.871 START TEST raid_rebuild_test_io 00:23:23.871 ************************************ 00:23:23.871 13:23:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:23:23.871 13:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:23:23.871 13:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:23:23.871 13:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:23:23.871 13:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:23:23.871 13:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:23:23.871 13:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:23:23.871 13:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:23.871 13:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo 
BaseBdev1 00:23:23.871 13:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:23:23.871 13:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:23.871 13:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:23:23.871 13:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:23:23.871 13:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:23.871 13:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:23.871 13:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:23:23.871 13:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:23:23.871 13:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:23:23.871 13:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:23:23.871 13:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:23:23.871 13:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:23:23.871 13:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:23:23.871 13:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:23:23.871 13:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:23:23.871 13:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # raid_pid=790594 00:23:23.871 13:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 790594 /var/tmp/spdk-raid.sock 00:23:23.871 13:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw 
-M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:23.872 13:23:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 790594 ']' 00:23:23.872 13:23:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:23.872 13:23:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:23.872 13:23:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:23.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:23.872 13:23:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:23.872 13:23:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:23.872 [2024-07-26 13:23:04.244630] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:23:23.872 [2024-07-26 13:23:04.244688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid790594 ] 00:23:23.872 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:23.872 Zero copy mechanism will not be used. 
00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:23.872 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:23.872 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:23.872 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:23.872 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:23.872 [2024-07-26 13:23:04.375876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.131 [2024-07-26 13:23:04.463690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.131 [2024-07-26 13:23:04.527168] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:24.131 [2024-07-26 13:23:04.527204] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:24.698 13:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:24.698 13:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:23:24.698 13:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:23:24.698 13:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:24.957 BaseBdev1_malloc 00:23:24.957 13:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:25.216 [2024-07-26 13:23:05.569093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:25.216 [2024-07-26 13:23:05.569135] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:23:25.216 [2024-07-26 13:23:05.569162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15515f0 00:23:25.216 [2024-07-26 13:23:05.569174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:25.216 [2024-07-26 13:23:05.570684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:25.216 [2024-07-26 13:23:05.570714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:25.216 BaseBdev1 00:23:25.216 13:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:23:25.216 13:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:25.475 BaseBdev2_malloc 00:23:25.475 13:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:25.734 [2024-07-26 13:23:06.026707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:25.734 [2024-07-26 13:23:06.026751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:25.734 [2024-07-26 13:23:06.026768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16f5130 00:23:25.734 [2024-07-26 13:23:06.026780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:25.734 [2024-07-26 13:23:06.028208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:25.734 [2024-07-26 13:23:06.028234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:25.734 BaseBdev2 00:23:25.734 13:23:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:25.993 spare_malloc 00:23:25.993 13:23:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:25.993 spare_delay 00:23:25.993 13:23:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:26.252 [2024-07-26 13:23:06.712888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:26.252 [2024-07-26 13:23:06.712931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.252 [2024-07-26 13:23:06.712949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16f4770 00:23:26.252 [2024-07-26 13:23:06.712961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.252 [2024-07-26 13:23:06.714357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.252 [2024-07-26 13:23:06.714384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:26.252 spare 00:23:26.252 13:23:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:23:26.524 [2024-07-26 13:23:06.937506] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:26.524 [2024-07-26 13:23:06.938669] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:26.524 [2024-07-26 13:23:06.938739] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 
0x1549270 00:23:26.524 [2024-07-26 13:23:06.938749] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:23:26.524 [2024-07-26 13:23:06.938944] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16f53c0 00:23:26.524 [2024-07-26 13:23:06.939070] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1549270 00:23:26.524 [2024-07-26 13:23:06.939080] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1549270 00:23:26.524 [2024-07-26 13:23:06.939192] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:26.524 13:23:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:26.524 13:23:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:26.524 13:23:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:26.524 13:23:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:26.524 13:23:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:26.524 13:23:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:26.524 13:23:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:26.524 13:23:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:26.524 13:23:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:26.524 13:23:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:26.524 13:23:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.524 13:23:06 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.802 13:23:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:26.802 "name": "raid_bdev1", 00:23:26.802 "uuid": "b36e1110-f360-4a8b-b573-9be754ac3f24", 00:23:26.802 "strip_size_kb": 0, 00:23:26.802 "state": "online", 00:23:26.802 "raid_level": "raid1", 00:23:26.802 "superblock": false, 00:23:26.802 "num_base_bdevs": 2, 00:23:26.802 "num_base_bdevs_discovered": 2, 00:23:26.802 "num_base_bdevs_operational": 2, 00:23:26.802 "base_bdevs_list": [ 00:23:26.802 { 00:23:26.802 "name": "BaseBdev1", 00:23:26.802 "uuid": "206e14ba-e6a0-5754-8274-21882c213fe8", 00:23:26.802 "is_configured": true, 00:23:26.802 "data_offset": 0, 00:23:26.802 "data_size": 65536 00:23:26.802 }, 00:23:26.802 { 00:23:26.802 "name": "BaseBdev2", 00:23:26.802 "uuid": "58c60405-b3cd-5adb-98f2-c73c29435d18", 00:23:26.802 "is_configured": true, 00:23:26.802 "data_offset": 0, 00:23:26.802 "data_size": 65536 00:23:26.802 } 00:23:26.802 ] 00:23:26.802 }' 00:23:26.802 13:23:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:26.802 13:23:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:27.370 13:23:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:27.370 13:23:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:23:27.629 [2024-07-26 13:23:07.972608] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:27.629 13:23:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:23:27.629 13:23:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.629 13:23:07 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:27.887 13:23:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:23:27.887 13:23:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:23:27.887 13:23:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:27.887 13:23:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@638 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:27.887 [2024-07-26 13:23:08.323269] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1548200 00:23:27.887 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:27.887 Zero copy mechanism will not be used. 00:23:27.887 Running I/O for 60 seconds... 
00:23:28.147 [2024-07-26 13:23:08.430099] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:28.147 [2024-07-26 13:23:08.430278] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1548200 00:23:28.147 13:23:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:28.147 13:23:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:28.147 13:23:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:28.147 13:23:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:28.147 13:23:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:28.147 13:23:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:28.147 13:23:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:28.147 13:23:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:28.147 13:23:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:28.147 13:23:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:28.147 13:23:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.147 13:23:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.406 13:23:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:28.406 "name": "raid_bdev1", 00:23:28.406 "uuid": "b36e1110-f360-4a8b-b573-9be754ac3f24", 00:23:28.406 "strip_size_kb": 0, 00:23:28.406 "state": "online", 00:23:28.406 "raid_level": "raid1", 00:23:28.406 "superblock": false, 
00:23:28.406 "num_base_bdevs": 2, 00:23:28.406 "num_base_bdevs_discovered": 1, 00:23:28.406 "num_base_bdevs_operational": 1, 00:23:28.406 "base_bdevs_list": [ 00:23:28.406 { 00:23:28.406 "name": null, 00:23:28.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.406 "is_configured": false, 00:23:28.406 "data_offset": 0, 00:23:28.406 "data_size": 65536 00:23:28.406 }, 00:23:28.406 { 00:23:28.406 "name": "BaseBdev2", 00:23:28.406 "uuid": "58c60405-b3cd-5adb-98f2-c73c29435d18", 00:23:28.406 "is_configured": true, 00:23:28.406 "data_offset": 0, 00:23:28.406 "data_size": 65536 00:23:28.406 } 00:23:28.406 ] 00:23:28.406 }' 00:23:28.406 13:23:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:28.406 13:23:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:28.973 13:23:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:28.973 [2024-07-26 13:23:09.498640] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:29.232 13:23:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:29.232 [2024-07-26 13:23:09.560255] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1550ef0 00:23:29.232 [2024-07-26 13:23:09.562478] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:29.232 [2024-07-26 13:23:09.671605] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:29.232 [2024-07-26 13:23:09.671976] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:29.491 [2024-07-26 13:23:09.897815] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 
00:23:29.491 [2024-07-26 13:23:09.897986] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:29.750 [2024-07-26 13:23:10.241253] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:29.750 [2024-07-26 13:23:10.241646] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:30.009 [2024-07-26 13:23:10.475251] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:30.268 13:23:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:30.268 13:23:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:30.268 13:23:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:30.268 13:23:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:30.268 13:23:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:30.268 13:23:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.268 13:23:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.268 [2024-07-26 13:23:10.787443] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:30.268 [2024-07-26 13:23:10.787699] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:30.527 13:23:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:30.527 "name": "raid_bdev1", 
00:23:30.527 "uuid": "b36e1110-f360-4a8b-b573-9be754ac3f24", 00:23:30.527 "strip_size_kb": 0, 00:23:30.527 "state": "online", 00:23:30.527 "raid_level": "raid1", 00:23:30.527 "superblock": false, 00:23:30.527 "num_base_bdevs": 2, 00:23:30.527 "num_base_bdevs_discovered": 2, 00:23:30.527 "num_base_bdevs_operational": 2, 00:23:30.527 "process": { 00:23:30.527 "type": "rebuild", 00:23:30.527 "target": "spare", 00:23:30.527 "progress": { 00:23:30.527 "blocks": 12288, 00:23:30.527 "percent": 18 00:23:30.527 } 00:23:30.527 }, 00:23:30.527 "base_bdevs_list": [ 00:23:30.527 { 00:23:30.527 "name": "spare", 00:23:30.527 "uuid": "06b74010-f356-5600-9445-78335b4fec9b", 00:23:30.527 "is_configured": true, 00:23:30.527 "data_offset": 0, 00:23:30.527 "data_size": 65536 00:23:30.527 }, 00:23:30.527 { 00:23:30.527 "name": "BaseBdev2", 00:23:30.527 "uuid": "58c60405-b3cd-5adb-98f2-c73c29435d18", 00:23:30.527 "is_configured": true, 00:23:30.527 "data_offset": 0, 00:23:30.527 "data_size": 65536 00:23:30.527 } 00:23:30.527 ] 00:23:30.527 }' 00:23:30.527 13:23:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:30.527 13:23:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:30.527 13:23:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:30.527 13:23:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:30.527 13:23:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:30.786 [2024-07-26 13:23:11.114249] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:30.786 [2024-07-26 13:23:11.271715] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:30.786 [2024-07-26 
13:23:11.280897] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:30.786 [2024-07-26 13:23:11.280923] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:30.786 [2024-07-26 13:23:11.280933] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:30.786 [2024-07-26 13:23:11.308738] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1548200 00:23:31.045 13:23:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:31.045 13:23:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:31.045 13:23:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:31.045 13:23:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:31.045 13:23:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:31.045 13:23:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:23:31.045 13:23:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:31.045 13:23:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:31.045 13:23:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:31.045 13:23:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:31.045 13:23:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.045 13:23:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.303 13:23:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:23:31.303 "name": "raid_bdev1", 00:23:31.303 "uuid": "b36e1110-f360-4a8b-b573-9be754ac3f24", 00:23:31.303 "strip_size_kb": 0, 00:23:31.303 "state": "online", 00:23:31.303 "raid_level": "raid1", 00:23:31.303 "superblock": false, 00:23:31.303 "num_base_bdevs": 2, 00:23:31.303 "num_base_bdevs_discovered": 1, 00:23:31.303 "num_base_bdevs_operational": 1, 00:23:31.303 "base_bdevs_list": [ 00:23:31.303 { 00:23:31.303 "name": null, 00:23:31.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.303 "is_configured": false, 00:23:31.303 "data_offset": 0, 00:23:31.303 "data_size": 65536 00:23:31.303 }, 00:23:31.303 { 00:23:31.303 "name": "BaseBdev2", 00:23:31.303 "uuid": "58c60405-b3cd-5adb-98f2-c73c29435d18", 00:23:31.303 "is_configured": true, 00:23:31.303 "data_offset": 0, 00:23:31.303 "data_size": 65536 00:23:31.303 } 00:23:31.303 ] 00:23:31.303 }' 00:23:31.303 13:23:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:31.303 13:23:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:31.870 13:23:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:31.870 13:23:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:31.870 13:23:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:31.870 13:23:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:31.870 13:23:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:31.870 13:23:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.870 13:23:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.129 13:23:12 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:32.129 "name": "raid_bdev1", 00:23:32.129 "uuid": "b36e1110-f360-4a8b-b573-9be754ac3f24", 00:23:32.129 "strip_size_kb": 0, 00:23:32.129 "state": "online", 00:23:32.129 "raid_level": "raid1", 00:23:32.129 "superblock": false, 00:23:32.129 "num_base_bdevs": 2, 00:23:32.129 "num_base_bdevs_discovered": 1, 00:23:32.129 "num_base_bdevs_operational": 1, 00:23:32.129 "base_bdevs_list": [ 00:23:32.129 { 00:23:32.129 "name": null, 00:23:32.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:32.129 "is_configured": false, 00:23:32.129 "data_offset": 0, 00:23:32.129 "data_size": 65536 00:23:32.129 }, 00:23:32.129 { 00:23:32.129 "name": "BaseBdev2", 00:23:32.129 "uuid": "58c60405-b3cd-5adb-98f2-c73c29435d18", 00:23:32.129 "is_configured": true, 00:23:32.129 "data_offset": 0, 00:23:32.129 "data_size": 65536 00:23:32.129 } 00:23:32.129 ] 00:23:32.129 }' 00:23:32.129 13:23:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:32.129 13:23:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:32.129 13:23:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:32.129 13:23:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:32.129 13:23:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:32.388 [2024-07-26 13:23:12.721922] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:32.388 [2024-07-26 13:23:12.768493] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x124fb40 00:23:32.388 [2024-07-26 13:23:12.769936] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:32.388 
13:23:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:23:32.388 [2024-07-26 13:23:12.894132] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:32.388 [2024-07-26 13:23:12.894380] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:32.646 [2024-07-26 13:23:13.112351] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:32.646 [2024-07-26 13:23:13.112557] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:33.212 [2024-07-26 13:23:13.456135] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:33.212 [2024-07-26 13:23:13.456454] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:33.212 [2024-07-26 13:23:13.688739] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:33.212 [2024-07-26 13:23:13.688861] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:33.471 13:23:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:33.471 13:23:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:33.471 13:23:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:33.471 13:23:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:33.471 13:23:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:33.471 13:23:13 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.471 13:23:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.823 13:23:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:33.823 "name": "raid_bdev1", 00:23:33.823 "uuid": "b36e1110-f360-4a8b-b573-9be754ac3f24", 00:23:33.823 "strip_size_kb": 0, 00:23:33.823 "state": "online", 00:23:33.823 "raid_level": "raid1", 00:23:33.823 "superblock": false, 00:23:33.823 "num_base_bdevs": 2, 00:23:33.823 "num_base_bdevs_discovered": 2, 00:23:33.823 "num_base_bdevs_operational": 2, 00:23:33.823 "process": { 00:23:33.823 "type": "rebuild", 00:23:33.823 "target": "spare", 00:23:33.823 "progress": { 00:23:33.823 "blocks": 12288, 00:23:33.823 "percent": 18 00:23:33.823 } 00:23:33.823 }, 00:23:33.823 "base_bdevs_list": [ 00:23:33.823 { 00:23:33.823 "name": "spare", 00:23:33.823 "uuid": "06b74010-f356-5600-9445-78335b4fec9b", 00:23:33.823 "is_configured": true, 00:23:33.823 "data_offset": 0, 00:23:33.823 "data_size": 65536 00:23:33.823 }, 00:23:33.823 { 00:23:33.823 "name": "BaseBdev2", 00:23:33.823 "uuid": "58c60405-b3cd-5adb-98f2-c73c29435d18", 00:23:33.823 "is_configured": true, 00:23:33.823 "data_offset": 0, 00:23:33.823 "data_size": 65536 00:23:33.823 } 00:23:33.823 ] 00:23:33.823 }' 00:23:33.823 13:23:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:33.823 [2024-07-26 13:23:14.042247] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:33.823 13:23:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:33.823 13:23:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:33.823 13:23:14 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:33.823 13:23:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:23:33.823 13:23:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:23:33.823 13:23:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:23:33.823 13:23:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:23:33.823 13:23:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # local timeout=780 00:23:33.823 13:23:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:33.823 13:23:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:33.823 13:23:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:33.823 13:23:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:33.823 13:23:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:33.823 13:23:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:33.823 13:23:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.823 13:23:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.824 13:23:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:33.824 "name": "raid_bdev1", 00:23:33.824 "uuid": "b36e1110-f360-4a8b-b573-9be754ac3f24", 00:23:33.824 "strip_size_kb": 0, 00:23:33.824 "state": "online", 00:23:33.824 "raid_level": "raid1", 00:23:33.824 "superblock": false, 00:23:33.824 "num_base_bdevs": 2, 00:23:33.824 
"num_base_bdevs_discovered": 2, 00:23:33.824 "num_base_bdevs_operational": 2, 00:23:33.824 "process": { 00:23:33.824 "type": "rebuild", 00:23:33.824 "target": "spare", 00:23:33.824 "progress": { 00:23:33.824 "blocks": 18432, 00:23:33.824 "percent": 28 00:23:33.824 } 00:23:33.824 }, 00:23:33.824 "base_bdevs_list": [ 00:23:33.824 { 00:23:33.824 "name": "spare", 00:23:33.824 "uuid": "06b74010-f356-5600-9445-78335b4fec9b", 00:23:33.824 "is_configured": true, 00:23:33.824 "data_offset": 0, 00:23:33.824 "data_size": 65536 00:23:33.824 }, 00:23:33.824 { 00:23:33.824 "name": "BaseBdev2", 00:23:33.824 "uuid": "58c60405-b3cd-5adb-98f2-c73c29435d18", 00:23:33.824 "is_configured": true, 00:23:33.824 "data_offset": 0, 00:23:33.824 "data_size": 65536 00:23:33.824 } 00:23:33.824 ] 00:23:33.824 }' 00:23:33.824 13:23:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:34.082 13:23:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:34.082 13:23:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:34.082 13:23:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:34.082 13:23:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:23:34.082 [2024-07-26 13:23:14.507196] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:35.020 13:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:35.020 13:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:35.020 13:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:35.020 13:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:35.020 13:23:15 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:35.020 13:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:35.020 13:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.020 13:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.020 [2024-07-26 13:23:15.497095] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:23:35.280 13:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:35.280 "name": "raid_bdev1", 00:23:35.280 "uuid": "b36e1110-f360-4a8b-b573-9be754ac3f24", 00:23:35.280 "strip_size_kb": 0, 00:23:35.280 "state": "online", 00:23:35.280 "raid_level": "raid1", 00:23:35.280 "superblock": false, 00:23:35.280 "num_base_bdevs": 2, 00:23:35.280 "num_base_bdevs_discovered": 2, 00:23:35.280 "num_base_bdevs_operational": 2, 00:23:35.280 "process": { 00:23:35.280 "type": "rebuild", 00:23:35.280 "target": "spare", 00:23:35.280 "progress": { 00:23:35.280 "blocks": 40960, 00:23:35.280 "percent": 62 00:23:35.280 } 00:23:35.280 }, 00:23:35.280 "base_bdevs_list": [ 00:23:35.280 { 00:23:35.280 "name": "spare", 00:23:35.280 "uuid": "06b74010-f356-5600-9445-78335b4fec9b", 00:23:35.280 "is_configured": true, 00:23:35.280 "data_offset": 0, 00:23:35.280 "data_size": 65536 00:23:35.280 }, 00:23:35.280 { 00:23:35.280 "name": "BaseBdev2", 00:23:35.280 "uuid": "58c60405-b3cd-5adb-98f2-c73c29435d18", 00:23:35.280 "is_configured": true, 00:23:35.280 "data_offset": 0, 00:23:35.280 "data_size": 65536 00:23:35.280 } 00:23:35.280 ] 00:23:35.280 }' 00:23:35.280 13:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:35.280 13:23:15 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:35.280 13:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:35.280 13:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:35.280 13:23:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:23:35.539 [2024-07-26 13:23:15.941238] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:23:35.798 [2024-07-26 13:23:16.158634] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:23:35.798 [2024-07-26 13:23:16.275797] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:23:36.056 [2024-07-26 13:23:16.580192] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:23:36.315 [2024-07-26 13:23:16.681478] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:23:36.315 13:23:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:36.315 13:23:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:36.315 13:23:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:36.315 13:23:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:36.315 13:23:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:36.315 13:23:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:36.315 13:23:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:23:36.315 13:23:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.574 13:23:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:36.574 "name": "raid_bdev1", 00:23:36.574 "uuid": "b36e1110-f360-4a8b-b573-9be754ac3f24", 00:23:36.574 "strip_size_kb": 0, 00:23:36.574 "state": "online", 00:23:36.574 "raid_level": "raid1", 00:23:36.574 "superblock": false, 00:23:36.574 "num_base_bdevs": 2, 00:23:36.574 "num_base_bdevs_discovered": 2, 00:23:36.574 "num_base_bdevs_operational": 2, 00:23:36.574 "process": { 00:23:36.574 "type": "rebuild", 00:23:36.574 "target": "spare", 00:23:36.574 "progress": { 00:23:36.574 "blocks": 63488, 00:23:36.574 "percent": 96 00:23:36.574 } 00:23:36.574 }, 00:23:36.574 "base_bdevs_list": [ 00:23:36.574 { 00:23:36.574 "name": "spare", 00:23:36.574 "uuid": "06b74010-f356-5600-9445-78335b4fec9b", 00:23:36.574 "is_configured": true, 00:23:36.574 "data_offset": 0, 00:23:36.574 "data_size": 65536 00:23:36.574 }, 00:23:36.574 { 00:23:36.574 "name": "BaseBdev2", 00:23:36.574 "uuid": "58c60405-b3cd-5adb-98f2-c73c29435d18", 00:23:36.574 "is_configured": true, 00:23:36.575 "data_offset": 0, 00:23:36.575 "data_size": 65536 00:23:36.575 } 00:23:36.575 ] 00:23:36.575 }' 00:23:36.575 13:23:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:36.575 [2024-07-26 13:23:16.985014] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:36.575 13:23:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:36.575 13:23:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:36.575 [2024-07-26 13:23:17.031123] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on 
raid bdev raid_bdev1 00:23:36.575 [2024-07-26 13:23:17.033256] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:36.575 13:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:23:36.575 13:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:23:37.954 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:23:37.954 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:37.954 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:37.954 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:23:37.954 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:23:37.954 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:37.954 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.954 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.954 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:37.954 "name": "raid_bdev1", 00:23:37.954 "uuid": "b36e1110-f360-4a8b-b573-9be754ac3f24", 00:23:37.954 "strip_size_kb": 0, 00:23:37.954 "state": "online", 00:23:37.954 "raid_level": "raid1", 00:23:37.954 "superblock": false, 00:23:37.954 "num_base_bdevs": 2, 00:23:37.954 "num_base_bdevs_discovered": 2, 00:23:37.954 "num_base_bdevs_operational": 2, 00:23:37.954 "base_bdevs_list": [ 00:23:37.954 { 00:23:37.954 "name": "spare", 00:23:37.954 "uuid": "06b74010-f356-5600-9445-78335b4fec9b", 00:23:37.954 "is_configured": true, 00:23:37.954 "data_offset": 0, 
00:23:37.954 "data_size": 65536 00:23:37.954 }, 00:23:37.954 { 00:23:37.954 "name": "BaseBdev2", 00:23:37.954 "uuid": "58c60405-b3cd-5adb-98f2-c73c29435d18", 00:23:37.954 "is_configured": true, 00:23:37.954 "data_offset": 0, 00:23:37.954 "data_size": 65536 00:23:37.954 } 00:23:37.954 ] 00:23:37.954 }' 00:23:37.954 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:37.954 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:37.954 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:37.954 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:23:37.954 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # break 00:23:37.954 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:37.954 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:37.954 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:37.954 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:37.954 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:37.954 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.954 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.213 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:38.213 "name": "raid_bdev1", 00:23:38.213 "uuid": "b36e1110-f360-4a8b-b573-9be754ac3f24", 00:23:38.213 "strip_size_kb": 0, 00:23:38.213 "state": "online", 00:23:38.213 
"raid_level": "raid1", 00:23:38.213 "superblock": false, 00:23:38.213 "num_base_bdevs": 2, 00:23:38.213 "num_base_bdevs_discovered": 2, 00:23:38.213 "num_base_bdevs_operational": 2, 00:23:38.213 "base_bdevs_list": [ 00:23:38.213 { 00:23:38.213 "name": "spare", 00:23:38.213 "uuid": "06b74010-f356-5600-9445-78335b4fec9b", 00:23:38.213 "is_configured": true, 00:23:38.213 "data_offset": 0, 00:23:38.213 "data_size": 65536 00:23:38.213 }, 00:23:38.213 { 00:23:38.213 "name": "BaseBdev2", 00:23:38.213 "uuid": "58c60405-b3cd-5adb-98f2-c73c29435d18", 00:23:38.213 "is_configured": true, 00:23:38.213 "data_offset": 0, 00:23:38.213 "data_size": 65536 00:23:38.213 } 00:23:38.213 ] 00:23:38.213 }' 00:23:38.213 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:38.213 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:38.213 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:38.213 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:38.213 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:38.213 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:38.213 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:38.213 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:38.213 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:38.213 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:38.213 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:38.213 13:23:18 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:38.213 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:38.213 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:38.213 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.213 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.472 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:38.472 "name": "raid_bdev1", 00:23:38.472 "uuid": "b36e1110-f360-4a8b-b573-9be754ac3f24", 00:23:38.472 "strip_size_kb": 0, 00:23:38.472 "state": "online", 00:23:38.472 "raid_level": "raid1", 00:23:38.472 "superblock": false, 00:23:38.472 "num_base_bdevs": 2, 00:23:38.472 "num_base_bdevs_discovered": 2, 00:23:38.472 "num_base_bdevs_operational": 2, 00:23:38.472 "base_bdevs_list": [ 00:23:38.472 { 00:23:38.472 "name": "spare", 00:23:38.472 "uuid": "06b74010-f356-5600-9445-78335b4fec9b", 00:23:38.472 "is_configured": true, 00:23:38.472 "data_offset": 0, 00:23:38.472 "data_size": 65536 00:23:38.472 }, 00:23:38.472 { 00:23:38.472 "name": "BaseBdev2", 00:23:38.472 "uuid": "58c60405-b3cd-5adb-98f2-c73c29435d18", 00:23:38.472 "is_configured": true, 00:23:38.472 "data_offset": 0, 00:23:38.472 "data_size": 65536 00:23:38.472 } 00:23:38.472 ] 00:23:38.472 }' 00:23:38.472 13:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:38.472 13:23:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:39.040 13:23:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:39.299 [2024-07-26 13:23:19.669785] 
bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:39.299 [2024-07-26 13:23:19.669815] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:39.299 00:23:39.299 Latency(us) 00:23:39.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.299 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:23:39.299 raid_bdev1 : 11.40 97.77 293.30 0.00 0.00 14022.04 276.89 117440.51 00:23:39.299 =================================================================================================================== 00:23:39.299 Total : 97.77 293.30 0.00 0.00 14022.04 276.89 117440.51 00:23:39.299 [2024-07-26 13:23:19.761722] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:39.299 [2024-07-26 13:23:19.761748] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:39.299 [2024-07-26 13:23:19.761816] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:39.299 [2024-07-26 13:23:19.761828] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1549270 name raid_bdev1, state offline 00:23:39.299 0 00:23:39.299 13:23:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.299 13:23:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # jq length 00:23:39.558 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:23:39.558 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:23:39.558 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:23:39.558 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 
00:23:39.558 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:39.558 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:23:39.558 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:39.558 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:39.558 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:39.558 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:23:39.558 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:39.558 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:39.558 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:23:39.818 /dev/nbd0 00:23:39.818 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:39.818 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:39.818 13:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:39.818 13:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:23:39.818 13:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:39.818 13:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:39.818 13:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:39.818 13:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:23:39.818 13:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:39.818 
13:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:39.818 13:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:39.818 1+0 records in 00:23:39.818 1+0 records out 00:23:39.818 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235332 s, 17.4 MB/s 00:23:39.818 13:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:39.818 13:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:23:39.818 13:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:39.818 13:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:39.818 13:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:23:39.818 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:39.818 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:39.818 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:23:39.818 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev2 ']' 00:23:39.818 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:23:39.818 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:39.818 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:23:39.818 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:39.818 13:23:20 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:23:39.818 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:39.818 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:23:39.818 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:39.818 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:39.818 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:23:40.110 /dev/nbd1 00:23:40.110 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:40.110 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:40.110 13:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:23:40.110 13:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:23:40.110 13:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:40.110 13:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:40.110 13:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:23:40.110 13:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:23:40.110 13:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:40.110 13:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:40.110 13:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:40.110 1+0 records in 00:23:40.110 1+0 records 
out 00:23:40.110 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275802 s, 14.9 MB/s 00:23:40.110 13:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:40.110 13:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:23:40.110 13:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:40.110 13:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:40.110 13:23:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:23:40.110 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:40.110 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:40.110 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:40.110 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:40.110 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:40.110 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:23:40.110 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:40.110 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:23:40.110 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:40.110 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:40.369 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:40.369 
13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:40.369 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:40.369 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:40.369 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:40.369 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:40.369 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:23:40.369 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:23:40.369 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:40.369 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:40.369 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:40.369 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:40.369 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:23:40.369 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:40.369 13:23:20 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:40.628 13:23:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:40.628 13:23:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:40.628 13:23:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:40.628 13:23:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:40.628 13:23:21 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:40.629 13:23:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:40.629 13:23:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:23:40.629 13:23:21 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:23:40.629 13:23:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:23:40.629 13:23:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@798 -- # killprocess 790594 00:23:40.629 13:23:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 790594 ']' 00:23:40.629 13:23:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 790594 00:23:40.629 13:23:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:23:40.629 13:23:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:40.629 13:23:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 790594 00:23:40.888 13:23:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:40.888 13:23:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:40.888 13:23:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 790594' 00:23:40.888 killing process with pid 790594 00:23:40.888 13:23:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 790594 00:23:40.888 Received shutdown signal, test time was about 12.840708 seconds 00:23:40.888 00:23:40.888 Latency(us) 00:23:40.888 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.888 =================================================================================================================== 00:23:40.888 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:40.888 
[2024-07-26 13:23:21.197384] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:40.888 13:23:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 790594 00:23:40.888 [2024-07-26 13:23:21.215389] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:40.888 13:23:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@800 -- # return 0 00:23:40.888 00:23:40.888 real 0m17.230s 00:23:40.888 user 0m25.990s 00:23:40.888 sys 0m2.692s 00:23:40.888 13:23:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:40.888 13:23:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:23:40.888 ************************************ 00:23:40.888 END TEST raid_rebuild_test_io 00:23:40.888 ************************************ 00:23:41.148 13:23:21 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:23:41.148 13:23:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:23:41.148 13:23:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:41.148 13:23:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:41.148 ************************************ 00:23:41.148 START TEST raid_rebuild_test_sb_io 00:23:41.148 ************************************ 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@588 -- # local verify=true 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:23:41.148 13:23:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # raid_pid=793737 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 793737 /var/tmp/spdk-raid.sock 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 793737 ']' 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:41.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:41.148 13:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:41.148 [2024-07-26 13:23:21.551795] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:23:41.148 [2024-07-26 13:23:21.551849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid793737 ] 00:23:41.148 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:41.148 Zero copy mechanism will not be used. 
00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:41.148 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:41.148 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:41.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:41.148 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:41.408 [2024-07-26 13:23:21.682824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.408 [2024-07-26 13:23:21.770321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.408 [2024-07-26 13:23:21.839478] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:41.408 [2024-07-26 13:23:21.839517] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:41.977 13:23:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:41.977 13:23:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:23:41.977 13:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:23:41.977 13:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:42.236 BaseBdev1_malloc 00:23:42.236 13:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:42.494 [2024-07-26 13:23:22.877659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:42.494 [2024-07-26 13:23:22.877701] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:42.494 [2024-07-26 13:23:22.877723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17025f0 00:23:42.494 [2024-07-26 13:23:22.877734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:42.494 [2024-07-26 13:23:22.879259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:42.494 [2024-07-26 13:23:22.879286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:42.494 BaseBdev1 00:23:42.494 13:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:23:42.494 13:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:42.753 BaseBdev2_malloc 00:23:42.753 13:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:43.012 [2024-07-26 13:23:23.323243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:43.012 [2024-07-26 13:23:23.323281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:43.012 [2024-07-26 13:23:23.323298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18a6130 00:23:43.012 [2024-07-26 13:23:23.323309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:43.012 [2024-07-26 13:23:23.324713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:43.012 [2024-07-26 13:23:23.324740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:43.012 BaseBdev2 00:23:43.012 13:23:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:43.271 spare_malloc 00:23:43.271 13:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:43.271 spare_delay 00:23:43.271 13:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:43.530 [2024-07-26 13:23:23.985194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:43.530 [2024-07-26 13:23:23.985236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:43.530 [2024-07-26 13:23:23.985254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18a5770 00:23:43.530 [2024-07-26 13:23:23.985265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:43.530 [2024-07-26 13:23:23.986689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:43.530 [2024-07-26 13:23:23.986716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:43.530 spare 00:23:43.530 13:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:23:44.099 [2024-07-26 13:23:24.486515] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:44.099 [2024-07-26 13:23:24.487711] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:44.099 [2024-07-26 13:23:24.487845] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x16fa270 00:23:44.099 [2024-07-26 13:23:24.487857] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:44.099 [2024-07-26 13:23:24.488040] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18a63c0 00:23:44.099 [2024-07-26 13:23:24.488174] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x16fa270 00:23:44.099 [2024-07-26 13:23:24.488184] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x16fa270 00:23:44.099 [2024-07-26 13:23:24.488286] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:44.099 13:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:44.099 13:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:44.099 13:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:44.099 13:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:44.099 13:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:44.099 13:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:44.099 13:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:44.099 13:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:44.099 13:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:44.099 13:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:44.099 13:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.099 13:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.359 13:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:44.359 "name": "raid_bdev1", 00:23:44.359 "uuid": "58a55173-69b6-43c2-a0b7-3fccc42753bd", 00:23:44.359 "strip_size_kb": 0, 00:23:44.359 "state": "online", 00:23:44.359 "raid_level": "raid1", 00:23:44.359 "superblock": true, 00:23:44.359 "num_base_bdevs": 2, 00:23:44.359 "num_base_bdevs_discovered": 2, 00:23:44.359 "num_base_bdevs_operational": 2, 00:23:44.359 "base_bdevs_list": [ 00:23:44.359 { 00:23:44.359 "name": "BaseBdev1", 00:23:44.359 "uuid": "e48d8196-bef9-5570-ace6-2296a7d0a69c", 00:23:44.359 "is_configured": true, 00:23:44.359 "data_offset": 2048, 00:23:44.359 "data_size": 63488 00:23:44.359 }, 00:23:44.359 { 00:23:44.359 "name": "BaseBdev2", 00:23:44.359 "uuid": "539a50b6-a842-525e-b88f-aee6e5db8fb4", 00:23:44.359 "is_configured": true, 00:23:44.359 "data_offset": 2048, 00:23:44.359 "data_size": 63488 00:23:44.359 } 00:23:44.359 ] 00:23:44.359 }' 00:23:44.359 13:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:44.359 13:23:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:44.927 13:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:44.927 13:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:23:45.186 [2024-07-26 13:23:25.537476] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:45.186 13:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:23:45.186 13:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.186 13:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:45.445 13:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:23:45.445 13:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:23:45.445 13:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:45.445 13:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@638 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:45.445 [2024-07-26 13:23:25.880126] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18a63c0 00:23:45.445 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:45.445 Zero copy mechanism will not be used. 00:23:45.445 Running I/O for 60 seconds... 
00:23:45.704 [2024-07-26 13:23:26.002344] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:23:45.704 [2024-07-26 13:23:26.002536] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x18a63c0
00:23:45.704 13:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:23:45.704 13:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:23:45.704 13:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:23:45.704 13:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:23:45.704 13:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:23:45.704 13:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1
00:23:45.704 13:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:23:45.704 13:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:23:45.704 13:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:23:45.704 13:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp
00:23:45.704 13:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:45.704 13:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:45.704 13:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:23:45.704 "name": "raid_bdev1",
00:23:45.704 "uuid": "58a55173-69b6-43c2-a0b7-3fccc42753bd",
00:23:45.704 "strip_size_kb": 0,
00:23:45.704 "state": "online",
00:23:45.704 "raid_level": "raid1",
00:23:45.704 "superblock": true,
00:23:45.704 "num_base_bdevs": 2,
00:23:45.704 "num_base_bdevs_discovered": 1,
00:23:45.704 "num_base_bdevs_operational": 1,
00:23:45.704 "base_bdevs_list": [
00:23:45.704 {
00:23:45.704 "name": null,
00:23:45.704 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:45.704 "is_configured": false,
00:23:45.704 "data_offset": 2048,
00:23:45.704 "data_size": 63488
00:23:45.704 },
00:23:45.704 {
00:23:45.704 "name": "BaseBdev2",
00:23:45.704 "uuid": "539a50b6-a842-525e-b88f-aee6e5db8fb4",
00:23:45.704 "is_configured": true,
00:23:45.704 "data_offset": 2048,
00:23:45.704 "data_size": 63488
00:23:45.704 }
00:23:45.704 ]
00:23:45.704 }'
00:23:45.704 13:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:23:45.704 13:23:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:23:46.271 13:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:23:46.530 [2024-07-26 13:23:26.929794] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:23:46.530 [2024-07-26 13:23:26.977045] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16f9700
00:23:46.530 [2024-07-26 13:23:26.979221] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:23:46.530 13:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1
00:23:46.789 [2024-07-26 13:23:27.088684] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:23:46.789 [2024-07-26 13:23:27.088959] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:23:46.789 [2024-07-26 13:23:27.293636] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:23:46.789 [2024-07-26 13:23:27.293776] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:23:47.048 [2024-07-26 13:23:27.521312] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:23:47.048 [2024-07-26 13:23:27.521547] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:23:47.307 [2024-07-26 13:23:27.623034] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:23:47.307 [2024-07-26 13:23:27.623160] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:23:47.566 13:23:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:23:47.566 13:23:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:23:47.566 13:23:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:23:47.566 13:23:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare
00:23:47.566 13:23:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:23:47.566 13:23:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:47.566 13:23:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:47.566 [2024-07-26 13:23:27.999526] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:23:47.566 [2024-07-26 13:23:27.999727] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:23:47.825 13:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:23:47.825 "name": "raid_bdev1",
00:23:47.825 "uuid": "58a55173-69b6-43c2-a0b7-3fccc42753bd",
00:23:47.825 "strip_size_kb": 0,
00:23:47.825 "state": "online",
00:23:47.825 "raid_level": "raid1",
00:23:47.825 "superblock": true,
00:23:47.825 "num_base_bdevs": 2,
00:23:47.825 "num_base_bdevs_discovered": 2,
00:23:47.825 "num_base_bdevs_operational": 2,
00:23:47.825 "process": {
00:23:47.825 "type": "rebuild",
00:23:47.825 "target": "spare",
00:23:47.825 "progress": {
00:23:47.825 "blocks": 16384,
00:23:47.825 "percent": 25
00:23:47.825 }
00:23:47.825 },
00:23:47.825 "base_bdevs_list": [
00:23:47.825 {
00:23:47.825 "name": "spare",
00:23:47.825 "uuid": "2ba02ba8-6f8e-5aab-8550-9d8ff9e45678",
00:23:47.825 "is_configured": true,
00:23:47.825 "data_offset": 2048,
00:23:47.825 "data_size": 63488
00:23:47.825 },
00:23:47.825 {
00:23:47.825 "name": "BaseBdev2",
00:23:47.825 "uuid": "539a50b6-a842-525e-b88f-aee6e5db8fb4",
00:23:47.825 "is_configured": true,
00:23:47.825 "data_offset": 2048,
00:23:47.825 "data_size": 63488
00:23:47.825 }
00:23:47.825 ]
00:23:47.825 }'
00:23:47.825 13:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:23:47.825 13:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:23:47.825 13:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:23:47.825 13:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:23:47.825 13:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:23:48.085 [2024-07-26 13:23:28.516995] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:23:48.344 [2024-07-26 13:23:28.707250] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:23:48.344 [2024-07-26 13:23:28.716148] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:48.344 [2024-07-26 13:23:28.716187] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:23:48.344 [2024-07-26 13:23:28.716196] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:23:48.344 [2024-07-26 13:23:28.736344] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x18a63c0
00:23:48.344 13:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:23:48.344 13:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:23:48.344 13:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:23:48.344 13:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:23:48.344 13:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:23:48.344 13:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1
00:23:48.344 13:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:23:48.344 13:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:23:48.344 13:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:23:48.344 13:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp
00:23:48.344 13:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:48.344 13:23:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:48.603 13:23:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:23:48.603 "name": "raid_bdev1",
00:23:48.603 "uuid": "58a55173-69b6-43c2-a0b7-3fccc42753bd",
00:23:48.603 "strip_size_kb": 0,
00:23:48.603 "state": "online",
00:23:48.603 "raid_level": "raid1",
00:23:48.603 "superblock": true,
00:23:48.603 "num_base_bdevs": 2,
00:23:48.603 "num_base_bdevs_discovered": 1,
00:23:48.603 "num_base_bdevs_operational": 1,
00:23:48.603 "base_bdevs_list": [
00:23:48.603 {
00:23:48.603 "name": null,
00:23:48.603 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:48.603 "is_configured": false,
00:23:48.603 "data_offset": 2048,
00:23:48.603 "data_size": 63488
00:23:48.603 },
00:23:48.603 {
00:23:48.603 "name": "BaseBdev2",
00:23:48.603 "uuid": "539a50b6-a842-525e-b88f-aee6e5db8fb4",
00:23:48.603 "is_configured": true,
00:23:48.603 "data_offset": 2048,
00:23:48.603 "data_size": 63488
00:23:48.603 }
00:23:48.603 ]
00:23:48.603 }'
00:23:48.603 13:23:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:23:48.603 13:23:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:23:49.170 13:23:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none
00:23:49.170 13:23:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:23:49.170 13:23:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none
00:23:49.171 13:23:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none
00:23:49.171 13:23:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:23:49.171 13:23:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:49.171 13:23:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:49.429 13:23:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:23:49.429 "name": "raid_bdev1",
00:23:49.429 "uuid": "58a55173-69b6-43c2-a0b7-3fccc42753bd",
00:23:49.429 "strip_size_kb": 0,
00:23:49.429 "state": "online",
00:23:49.429 "raid_level": "raid1",
00:23:49.429 "superblock": true,
00:23:49.429 "num_base_bdevs": 2,
00:23:49.429 "num_base_bdevs_discovered": 1,
00:23:49.429 "num_base_bdevs_operational": 1,
00:23:49.429 "base_bdevs_list": [
00:23:49.429 {
00:23:49.429 "name": null,
00:23:49.429 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:49.429 "is_configured": false,
00:23:49.429 "data_offset": 2048,
00:23:49.429 "data_size": 63488
00:23:49.429 },
00:23:49.429 {
00:23:49.429 "name": "BaseBdev2",
00:23:49.429 "uuid": "539a50b6-a842-525e-b88f-aee6e5db8fb4",
00:23:49.429 "is_configured": true,
00:23:49.429 "data_offset": 2048,
00:23:49.429 "data_size": 63488
00:23:49.429 }
00:23:49.429 ]
00:23:49.429 }'
00:23:49.429 13:23:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:23:49.429 13:23:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]]
00:23:49.429 13:23:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:23:49.429 13:23:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:23:49.429 13:23:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:23:49.688 [2024-07-26 13:23:30.158788] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:23:49.688 [2024-07-26 13:23:30.212466] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16f9700
00:23:49.688 [2024-07-26 13:23:30.213851] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:23:49.946 13:23:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@678 -- # sleep 1
00:23:49.946 [2024-07-26 13:23:30.330783] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:23:49.946 [2024-07-26 13:23:30.331069] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:23:50.205 [2024-07-26 13:23:30.548867] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:23:50.205 [2024-07-26 13:23:30.548987] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:23:50.773 [2024-07-26 13:23:31.023959] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:23:50.773 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:23:50.773 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:23:50.773 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:23:50.773 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare
00:23:50.773 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:23:50.773 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:50.773 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:51.032 [2024-07-26 13:23:31.359021] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:23:51.032 [2024-07-26 13:23:31.359316] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:23:51.032 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:23:51.032 "name": "raid_bdev1",
00:23:51.032 "uuid": "58a55173-69b6-43c2-a0b7-3fccc42753bd",
00:23:51.032 "strip_size_kb": 0,
00:23:51.032 "state": "online",
00:23:51.032 "raid_level": "raid1",
00:23:51.032 "superblock": true,
00:23:51.032 "num_base_bdevs": 2,
00:23:51.032 "num_base_bdevs_discovered": 2,
00:23:51.032 "num_base_bdevs_operational": 2,
00:23:51.032 "process": {
00:23:51.032 "type": "rebuild",
00:23:51.032 "target": "spare",
00:23:51.032 "progress": {
00:23:51.032 "blocks": 14336,
00:23:51.032 "percent": 22
00:23:51.032 }
00:23:51.032 },
00:23:51.032 "base_bdevs_list": [
00:23:51.032 {
00:23:51.032 "name": "spare",
00:23:51.032 "uuid": "2ba02ba8-6f8e-5aab-8550-9d8ff9e45678",
00:23:51.032 "is_configured": true,
00:23:51.032 "data_offset": 2048,
00:23:51.032 "data_size": 63488
00:23:51.032 },
00:23:51.032 {
00:23:51.032 "name": "BaseBdev2",
00:23:51.032 "uuid": "539a50b6-a842-525e-b88f-aee6e5db8fb4",
00:23:51.032 "is_configured": true,
00:23:51.032 "data_offset": 2048,
00:23:51.032 "data_size": 63488
00:23:51.032 }
00:23:51.032 ]
00:23:51.032 }'
00:23:51.032 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:23:51.032 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:23:51.032 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:23:51.032 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:23:51.032 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' true = true ']'
00:23:51.032 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' = false ']'
00:23:51.032 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected
00:23:51.032 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2
00:23:51.032 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']'
00:23:51.032 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']'
00:23:51.032 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # local timeout=797
00:23:51.032 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout ))
00:23:51.032 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:23:51.032 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:23:51.032 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:23:51.032 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare
00:23:51.032 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:23:51.032 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:51.032 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:51.291 [2024-07-26 13:23:31.562195] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:23:51.291 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:23:51.291 "name": "raid_bdev1",
00:23:51.291 "uuid": "58a55173-69b6-43c2-a0b7-3fccc42753bd",
00:23:51.291 "strip_size_kb": 0,
00:23:51.291 "state": "online",
00:23:51.291 "raid_level": "raid1",
00:23:51.291 "superblock": true,
00:23:51.291 "num_base_bdevs": 2,
00:23:51.291 "num_base_bdevs_discovered": 2,
00:23:51.291 "num_base_bdevs_operational": 2,
00:23:51.291 "process": {
00:23:51.291 "type": "rebuild",
00:23:51.291 "target": "spare",
00:23:51.291 "progress": {
00:23:51.291 "blocks": 18432,
00:23:51.291 "percent": 29
00:23:51.291 }
00:23:51.291 },
00:23:51.291 "base_bdevs_list": [
00:23:51.291 {
00:23:51.291 "name": "spare",
00:23:51.291 "uuid": "2ba02ba8-6f8e-5aab-8550-9d8ff9e45678",
00:23:51.291 "is_configured": true,
00:23:51.291 "data_offset": 2048,
00:23:51.291 "data_size": 63488
00:23:51.291 },
00:23:51.291 {
00:23:51.291 "name": "BaseBdev2",
00:23:51.291 "uuid": "539a50b6-a842-525e-b88f-aee6e5db8fb4",
00:23:51.291 "is_configured": true,
00:23:51.291 "data_offset": 2048,
00:23:51.291 "data_size": 63488
00:23:51.291 }
00:23:51.291 ]
00:23:51.291 }'
00:23:51.291 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:23:51.291 [2024-07-26 13:23:31.798155] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:23:51.291 [2024-07-26 13:23:31.798540] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:23:51.291 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:23:51.291 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:23:51.550 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:23:51.550 13:23:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1
00:23:51.550 [2024-07-26 13:23:32.009412] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576
00:23:52.117 [2024-07-26 13:23:32.359927] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720
00:23:52.117 [2024-07-26 13:23:32.572330] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864
00:23:52.377 13:23:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout ))
00:23:52.377 13:23:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:23:52.377 13:23:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:23:52.377 13:23:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:23:52.377 13:23:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare
00:23:52.377 13:23:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:23:52.377 13:23:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:52.377 13:23:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:52.636 13:23:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:23:52.636 "name": "raid_bdev1",
00:23:52.636 "uuid": "58a55173-69b6-43c2-a0b7-3fccc42753bd",
00:23:52.636 "strip_size_kb": 0,
00:23:52.636 "state": "online",
00:23:52.636 "raid_level": "raid1",
00:23:52.636 "superblock": true,
00:23:52.636 "num_base_bdevs": 2,
00:23:52.636 "num_base_bdevs_discovered": 2,
00:23:52.636 "num_base_bdevs_operational": 2,
00:23:52.636 "process": {
00:23:52.636 "type": "rebuild",
00:23:52.636 "target": "spare",
00:23:52.636 "progress": {
00:23:52.636 "blocks": 38912,
00:23:52.636 "percent": 61
00:23:52.636 }
00:23:52.636 },
00:23:52.636 "base_bdevs_list": [
00:23:52.636 {
00:23:52.636 "name": "spare",
00:23:52.636 "uuid": "2ba02ba8-6f8e-5aab-8550-9d8ff9e45678",
00:23:52.636 "is_configured": true,
00:23:52.636 "data_offset": 2048,
00:23:52.636 "data_size": 63488
00:23:52.636 },
00:23:52.636 {
00:23:52.636 "name": "BaseBdev2",
00:23:52.636 "uuid": "539a50b6-a842-525e-b88f-aee6e5db8fb4",
00:23:52.636 "is_configured": true,
00:23:52.636 "data_offset": 2048,
00:23:52.637 "data_size": 63488
00:23:52.637 }
00:23:52.637 ]
00:23:52.637 }'
00:23:52.637 13:23:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:23:52.637 13:23:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:23:52.637 13:23:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:23:52.923 13:23:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:23:52.923 13:23:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1
00:23:52.923 [2024-07-26 13:23:33.359921] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152
00:23:52.923 [2024-07-26 13:23:33.360211] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152
00:23:53.219 [2024-07-26 13:23:33.570478] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152
00:23:53.787 13:23:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout ))
00:23:53.787 13:23:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:23:53.787 13:23:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:23:53.787 13:23:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:23:53.787 13:23:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare
00:23:53.787 13:23:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:23:53.787 13:23:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:53.787 13:23:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:53.787 [2024-07-26 13:23:34.268300] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440
00:23:54.045 13:23:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:23:54.045 "name": "raid_bdev1",
00:23:54.045 "uuid": "58a55173-69b6-43c2-a0b7-3fccc42753bd",
00:23:54.045 "strip_size_kb": 0,
00:23:54.045 "state": "online",
00:23:54.045 "raid_level": "raid1",
00:23:54.045 "superblock": true,
00:23:54.045 "num_base_bdevs": 2,
00:23:54.045 "num_base_bdevs_discovered": 2,
00:23:54.045 "num_base_bdevs_operational": 2,
00:23:54.045 "process": {
00:23:54.045 "type": "rebuild",
00:23:54.045 "target": "spare",
00:23:54.045 "progress": {
00:23:54.045 "blocks": 59392,
00:23:54.045 "percent": 93
00:23:54.045 }
00:23:54.045 },
00:23:54.045 "base_bdevs_list": [
00:23:54.045 {
00:23:54.045 "name": "spare",
00:23:54.045 "uuid": "2ba02ba8-6f8e-5aab-8550-9d8ff9e45678",
00:23:54.045 "is_configured": true,
00:23:54.045 "data_offset": 2048,
00:23:54.045 "data_size": 63488
00:23:54.045 },
00:23:54.045 {
00:23:54.045 "name": "BaseBdev2",
00:23:54.045 "uuid": "539a50b6-a842-525e-b88f-aee6e5db8fb4",
00:23:54.045 "is_configured": true,
00:23:54.045 "data_offset": 2048,
00:23:54.045 "data_size": 63488
00:23:54.045 }
00:23:54.045 ]
00:23:54.045 }'
00:23:54.045 13:23:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:23:54.045 13:23:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:23:54.045 13:23:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:23:54.045 13:23:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:23:54.045 13:23:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1
00:23:54.304 [2024-07-26 13:23:34.604187] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:23:54.304 [2024-07-26 13:23:34.711821] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:23:54.304 [2024-07-26 13:23:34.713280] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:55.239 13:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout ))
00:23:55.239 13:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:23:55.239 13:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:23:55.239 13:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:23:55.239 13:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare
00:23:55.239 13:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:23:55.239 13:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:55.239 13:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:55.239 13:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:23:55.239 "name": "raid_bdev1",
00:23:55.239 "uuid": "58a55173-69b6-43c2-a0b7-3fccc42753bd",
00:23:55.239 "strip_size_kb": 0,
00:23:55.239 "state": "online",
00:23:55.239 "raid_level": "raid1",
00:23:55.239 "superblock": true,
00:23:55.239 "num_base_bdevs": 2,
00:23:55.239 "num_base_bdevs_discovered": 2,
00:23:55.239 "num_base_bdevs_operational": 2,
00:23:55.239 "base_bdevs_list": [
00:23:55.239 {
00:23:55.239 "name": "spare",
00:23:55.239 "uuid": "2ba02ba8-6f8e-5aab-8550-9d8ff9e45678",
00:23:55.239 "is_configured": true,
00:23:55.239 "data_offset": 2048,
00:23:55.239 "data_size": 63488
00:23:55.239 },
00:23:55.239 {
00:23:55.239 "name": "BaseBdev2",
00:23:55.239 "uuid": "539a50b6-a842-525e-b88f-aee6e5db8fb4",
00:23:55.239 "is_configured": true,
00:23:55.239 "data_offset": 2048,
00:23:55.239 "data_size": 63488
00:23:55.239 }
00:23:55.239 ]
00:23:55.239 }'
00:23:55.239 13:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:23:55.498 13:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]]
00:23:55.498 13:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:23:55.498 13:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]]
00:23:55.498 13:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # break
00:23:55.498 13:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none
00:23:55.498 13:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:23:55.498 13:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none
00:23:55.498 13:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none
00:23:55.498 13:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:23:55.498 13:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:55.498 13:23:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:55.758 13:23:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:23:55.758 "name": "raid_bdev1",
00:23:55.758 "uuid": "58a55173-69b6-43c2-a0b7-3fccc42753bd",
00:23:55.758 "strip_size_kb": 0,
00:23:55.758 "state": "online",
00:23:55.758 "raid_level": "raid1",
00:23:55.758 "superblock": true,
00:23:55.758 "num_base_bdevs": 2,
00:23:55.758 "num_base_bdevs_discovered": 2,
00:23:55.758 "num_base_bdevs_operational": 2,
00:23:55.758 "base_bdevs_list": [
00:23:55.758 {
00:23:55.758 "name": "spare",
00:23:55.758 "uuid": "2ba02ba8-6f8e-5aab-8550-9d8ff9e45678",
00:23:55.758 "is_configured": true,
00:23:55.758 "data_offset": 2048,
00:23:55.758 "data_size": 63488
00:23:55.758 },
00:23:55.758 {
00:23:55.758 "name": "BaseBdev2",
00:23:55.758 "uuid": "539a50b6-a842-525e-b88f-aee6e5db8fb4",
00:23:55.758 "is_configured": true,
00:23:55.758 "data_offset": 2048,
00:23:55.758 "data_size": 63488
00:23:55.758 }
00:23:55.758 ]
00:23:55.758 }'
00:23:55.758 13:23:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:23:55.758 13:23:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]]
00:23:55.758 13:23:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:23:55.758 13:23:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:23:55.758 13:23:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:23:55.758 13:23:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:23:55.758 13:23:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:23:55.758 13:23:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:23:55.758 13:23:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:23:55.758 13:23:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2
00:23:55.758 13:23:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:23:55.758 13:23:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:23:55.758 13:23:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:23:55.758 13:23:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp
00:23:55.758 13:23:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:55.758 13:23:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:56.017 13:23:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:23:56.017 "name": "raid_bdev1",
00:23:56.017 "uuid": "58a55173-69b6-43c2-a0b7-3fccc42753bd",
00:23:56.017 "strip_size_kb": 0,
00:23:56.017 "state": "online",
00:23:56.017 "raid_level": "raid1",
00:23:56.017 "superblock": true,
00:23:56.017 "num_base_bdevs": 2,
00:23:56.017 "num_base_bdevs_discovered": 2,
00:23:56.017 "num_base_bdevs_operational": 2,
00:23:56.017 "base_bdevs_list": [
00:23:56.017 {
00:23:56.017 "name": "spare",
00:23:56.017 "uuid": "2ba02ba8-6f8e-5aab-8550-9d8ff9e45678",
00:23:56.017 "is_configured": true,
00:23:56.017 "data_offset": 2048,
00:23:56.017 "data_size": 63488
00:23:56.017 },
00:23:56.017 {
00:23:56.017 "name": "BaseBdev2",
00:23:56.017 "uuid": "539a50b6-a842-525e-b88f-aee6e5db8fb4",
00:23:56.017 "is_configured": true,
00:23:56.017 "data_offset": 2048,
00:23:56.017 "data_size": 63488
00:23:56.017 }
00:23:56.017 ]
00:23:56.017 }'
00:23:56.018 13:23:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:23:56.018 13:23:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:23:56.586 13:23:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:23:56.845 [2024-07-26 13:23:37.173296] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:23:56.845 [2024-07-26 13:23:37.173323] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:23:56.845
00:23:56.845 Latency(us)
00:23:56.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:56.845 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:23:56.845 raid_bdev1 : 11.31 98.39 295.18 0.00 0.00 13622.25 270.34 117440.51
00:23:56.845 ===================================================================================================================
00:23:56.845 Total : 98.39 295.18 0.00 0.00 13622.25 270.34 117440.51
00:23:56.845 [2024-07-26 13:23:37.225056] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:56.845 [2024-07-26 13:23:37.225080] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:23:56.845 [2024-07-26 13:23:37.225160] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:23:56.845 [2024-07-26 13:23:37.225172] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16fa270 name raid_bdev1, state offline
00:23:56.845 0
00:23:56.845 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:57.104 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # jq length
00:23:57.104 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]]
00:23:57.104 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']'
00:23:57.104 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']'
00:23:57.104 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0
00:23:57.104 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:23:57.104 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare')
00:23:57.104 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:23:57.104 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:23:57.104 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:23:57.104 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i
00:23:57.104 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:23:57.104 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:23:57.104 13:23:37
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:23:57.363 /dev/nbd0 00:23:57.363 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:57.363 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:57.363 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:57.363 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:23:57.363 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:57.363 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:57.363 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:57.363 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:23:57.363 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:57.363 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:57.363 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:57.363 1+0 records in 00:23:57.363 1+0 records out 00:23:57.363 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245959 s, 16.7 MB/s 00:23:57.363 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:57.363 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:23:57.363 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm 
-f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:57.363 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:57.363 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:23:57.363 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:57.363 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:57.363 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:23:57.363 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev2 ']' 00:23:57.363 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:23:57.363 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:57.363 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:23:57.363 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:57.363 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:23:57.363 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:57.363 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:23:57.364 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:57.364 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:57.364 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:23:57.623 /dev/nbd1 00:23:57.623 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:57.623 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:57.623 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:23:57.623 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:23:57.623 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:57.623 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:57.623 13:23:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:23:57.623 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:23:57.623 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:57.623 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:57.623 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:57.623 1+0 records in 00:23:57.623 1+0 records out 00:23:57.623 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253603 s, 16.2 MB/s 00:23:57.623 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:57.623 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:23:57.623 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:23:57.623 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:57.623 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@889 -- # return 0 00:23:57.623 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:57.623 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:57.623 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:57.623 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:57.623 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:57.623 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:23:57.623 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:57.623 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:23:57.623 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:57.623 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:57.882 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:57.882 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:57.882 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:57.882 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:57.882 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:57.882 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:57.882 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:23:57.882 13:23:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:23:57.882 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:57.882 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:57.882 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:57.882 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:57.882 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:23:57.882 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:57.882 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:58.141 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:58.141 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:58.141 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:58.141 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:58.141 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:58.141 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:58.141 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:23:58.141 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:23:58.141 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:23:58.141 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:58.400 13:23:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:58.659 [2024-07-26 13:23:38.983337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:58.659 [2024-07-26 13:23:38.983379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:58.659 [2024-07-26 13:23:38.983397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17001b0 00:23:58.659 [2024-07-26 13:23:38.983408] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:58.659 [2024-07-26 13:23:38.984944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:58.659 [2024-07-26 13:23:38.984970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:58.659 [2024-07-26 13:23:38.985041] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:58.659 [2024-07-26 13:23:38.985067] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:58.659 [2024-07-26 13:23:38.985171] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:58.659 spare 00:23:58.659 13:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:58.659 13:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:58.659 13:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:58.659 13:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:23:58.659 13:23:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:23:58.659 13:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:23:58.659 13:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:58.659 13:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:58.659 13:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:58.659 13:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:58.659 13:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.659 13:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.659 [2024-07-26 13:23:39.085480] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x16fdc80 00:23:58.659 [2024-07-26 13:23:39.085498] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:58.659 [2024-07-26 13:23:39.085666] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x189d2b0 00:23:58.659 [2024-07-26 13:23:39.085797] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x16fdc80 00:23:58.659 [2024-07-26 13:23:39.085807] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x16fdc80 00:23:58.659 [2024-07-26 13:23:39.085911] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:58.918 13:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:58.918 "name": "raid_bdev1", 00:23:58.918 "uuid": "58a55173-69b6-43c2-a0b7-3fccc42753bd", 00:23:58.918 "strip_size_kb": 0, 00:23:58.918 "state": "online", 00:23:58.918 "raid_level": "raid1", 00:23:58.918 
"superblock": true, 00:23:58.918 "num_base_bdevs": 2, 00:23:58.918 "num_base_bdevs_discovered": 2, 00:23:58.918 "num_base_bdevs_operational": 2, 00:23:58.918 "base_bdevs_list": [ 00:23:58.918 { 00:23:58.918 "name": "spare", 00:23:58.918 "uuid": "2ba02ba8-6f8e-5aab-8550-9d8ff9e45678", 00:23:58.918 "is_configured": true, 00:23:58.918 "data_offset": 2048, 00:23:58.918 "data_size": 63488 00:23:58.918 }, 00:23:58.918 { 00:23:58.918 "name": "BaseBdev2", 00:23:58.919 "uuid": "539a50b6-a842-525e-b88f-aee6e5db8fb4", 00:23:58.919 "is_configured": true, 00:23:58.919 "data_offset": 2048, 00:23:58.919 "data_size": 63488 00:23:58.919 } 00:23:58.919 ] 00:23:58.919 }' 00:23:58.919 13:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:58.919 13:23:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:23:59.486 13:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:59.486 13:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:23:59.486 13:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:23:59.486 13:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:23:59.486 13:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:23:59.486 13:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.486 13:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.745 13:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:59.745 "name": "raid_bdev1", 00:23:59.745 "uuid": "58a55173-69b6-43c2-a0b7-3fccc42753bd", 00:23:59.745 "strip_size_kb": 0, 
00:23:59.745 "state": "online", 00:23:59.745 "raid_level": "raid1", 00:23:59.745 "superblock": true, 00:23:59.745 "num_base_bdevs": 2, 00:23:59.745 "num_base_bdevs_discovered": 2, 00:23:59.745 "num_base_bdevs_operational": 2, 00:23:59.745 "base_bdevs_list": [ 00:23:59.745 { 00:23:59.745 "name": "spare", 00:23:59.745 "uuid": "2ba02ba8-6f8e-5aab-8550-9d8ff9e45678", 00:23:59.745 "is_configured": true, 00:23:59.745 "data_offset": 2048, 00:23:59.745 "data_size": 63488 00:23:59.745 }, 00:23:59.745 { 00:23:59.745 "name": "BaseBdev2", 00:23:59.745 "uuid": "539a50b6-a842-525e-b88f-aee6e5db8fb4", 00:23:59.745 "is_configured": true, 00:23:59.745 "data_offset": 2048, 00:23:59.745 "data_size": 63488 00:23:59.745 } 00:23:59.745 ] 00:23:59.745 }' 00:23:59.745 13:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:23:59.745 13:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:23:59.745 13:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:23:59.745 13:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:59.745 13:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.745 13:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:00.004 13:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:24:00.004 13:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:00.261 [2024-07-26 13:23:40.579825] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:00.261 13:23:40 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:00.261 13:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:00.261 13:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:00.261 13:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:00.261 13:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:00.261 13:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:00.261 13:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:00.261 13:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:00.261 13:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:00.261 13:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:00.261 13:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.261 13:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.520 13:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:00.520 "name": "raid_bdev1", 00:24:00.520 "uuid": "58a55173-69b6-43c2-a0b7-3fccc42753bd", 00:24:00.520 "strip_size_kb": 0, 00:24:00.520 "state": "online", 00:24:00.520 "raid_level": "raid1", 00:24:00.520 "superblock": true, 00:24:00.520 "num_base_bdevs": 2, 00:24:00.520 "num_base_bdevs_discovered": 1, 00:24:00.520 "num_base_bdevs_operational": 1, 00:24:00.520 "base_bdevs_list": [ 00:24:00.520 { 00:24:00.520 "name": null, 00:24:00.520 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:24:00.520 "is_configured": false, 00:24:00.520 "data_offset": 2048, 00:24:00.520 "data_size": 63488 00:24:00.520 }, 00:24:00.520 { 00:24:00.520 "name": "BaseBdev2", 00:24:00.520 "uuid": "539a50b6-a842-525e-b88f-aee6e5db8fb4", 00:24:00.520 "is_configured": true, 00:24:00.520 "data_offset": 2048, 00:24:00.520 "data_size": 63488 00:24:00.520 } 00:24:00.520 ] 00:24:00.520 }' 00:24:00.520 13:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:00.520 13:23:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:01.088 13:23:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:01.088 [2024-07-26 13:23:41.602647] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:01.089 [2024-07-26 13:23:41.602794] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:01.089 [2024-07-26 13:23:41.602810] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:01.089 [2024-07-26 13:23:41.602837] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:01.089 [2024-07-26 13:23:41.607961] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18a63c0 00:24:01.089 [2024-07-26 13:23:41.610054] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:01.347 13:23:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # sleep 1 00:24:02.285 13:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:02.285 13:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:02.285 13:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:02.285 13:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:02.285 13:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:02.285 13:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.285 13:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:02.545 13:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:02.545 "name": "raid_bdev1", 00:24:02.545 "uuid": "58a55173-69b6-43c2-a0b7-3fccc42753bd", 00:24:02.545 "strip_size_kb": 0, 00:24:02.545 "state": "online", 00:24:02.545 "raid_level": "raid1", 00:24:02.545 "superblock": true, 00:24:02.545 "num_base_bdevs": 2, 00:24:02.545 "num_base_bdevs_discovered": 2, 00:24:02.545 "num_base_bdevs_operational": 2, 00:24:02.545 "process": { 00:24:02.545 "type": "rebuild", 00:24:02.545 "target": "spare", 00:24:02.545 "progress": { 00:24:02.545 "blocks": 24576, 
00:24:02.545 "percent": 38 00:24:02.545 } 00:24:02.545 }, 00:24:02.545 "base_bdevs_list": [ 00:24:02.545 { 00:24:02.545 "name": "spare", 00:24:02.545 "uuid": "2ba02ba8-6f8e-5aab-8550-9d8ff9e45678", 00:24:02.545 "is_configured": true, 00:24:02.545 "data_offset": 2048, 00:24:02.545 "data_size": 63488 00:24:02.545 }, 00:24:02.545 { 00:24:02.545 "name": "BaseBdev2", 00:24:02.545 "uuid": "539a50b6-a842-525e-b88f-aee6e5db8fb4", 00:24:02.545 "is_configured": true, 00:24:02.545 "data_offset": 2048, 00:24:02.545 "data_size": 63488 00:24:02.545 } 00:24:02.545 ] 00:24:02.545 }' 00:24:02.545 13:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:02.545 13:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:02.545 13:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:02.545 13:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:02.545 13:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:02.805 [2024-07-26 13:23:43.152905] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:02.805 [2024-07-26 13:23:43.221886] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:02.805 [2024-07-26 13:23:43.221930] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:02.805 [2024-07-26 13:23:43.221944] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:02.805 [2024-07-26 13:23:43.221952] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:02.805 13:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:24:02.805 13:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:02.805 13:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:02.805 13:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:02.805 13:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:02.805 13:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:02.805 13:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:02.805 13:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:02.805 13:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:02.805 13:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:02.805 13:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.805 13:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.064 13:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:03.064 "name": "raid_bdev1", 00:24:03.064 "uuid": "58a55173-69b6-43c2-a0b7-3fccc42753bd", 00:24:03.064 "strip_size_kb": 0, 00:24:03.064 "state": "online", 00:24:03.064 "raid_level": "raid1", 00:24:03.064 "superblock": true, 00:24:03.064 "num_base_bdevs": 2, 00:24:03.064 "num_base_bdevs_discovered": 1, 00:24:03.064 "num_base_bdevs_operational": 1, 00:24:03.064 "base_bdevs_list": [ 00:24:03.064 { 00:24:03.064 "name": null, 00:24:03.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:03.064 "is_configured": false, 00:24:03.064 
"data_offset": 2048, 00:24:03.064 "data_size": 63488 00:24:03.064 }, 00:24:03.064 { 00:24:03.064 "name": "BaseBdev2", 00:24:03.064 "uuid": "539a50b6-a842-525e-b88f-aee6e5db8fb4", 00:24:03.064 "is_configured": true, 00:24:03.064 "data_offset": 2048, 00:24:03.064 "data_size": 63488 00:24:03.064 } 00:24:03.064 ] 00:24:03.064 }' 00:24:03.064 13:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:03.064 13:23:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:03.632 13:23:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:03.892 [2024-07-26 13:23:44.237162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:03.892 [2024-07-26 13:23:44.237209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:03.892 [2024-07-26 13:23:44.237230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16fb500 00:24:03.892 [2024-07-26 13:23:44.237241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:03.892 [2024-07-26 13:23:44.237585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:03.892 [2024-07-26 13:23:44.237602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:03.892 [2024-07-26 13:23:44.237676] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:03.892 [2024-07-26 13:23:44.237688] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:03.892 [2024-07-26 13:23:44.237698] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:03.892 [2024-07-26 13:23:44.237715] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:03.892 [2024-07-26 13:23:44.242855] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x189d2b0 00:24:03.892 spare 00:24:03.892 [2024-07-26 13:23:44.244212] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:03.892 13:23:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # sleep 1 00:24:04.829 13:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:04.829 13:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:04.829 13:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:04.829 13:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:04.829 13:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:04.829 13:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.829 13:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.088 13:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:05.088 "name": "raid_bdev1", 00:24:05.088 "uuid": "58a55173-69b6-43c2-a0b7-3fccc42753bd", 00:24:05.088 "strip_size_kb": 0, 00:24:05.088 "state": "online", 00:24:05.088 "raid_level": "raid1", 00:24:05.088 "superblock": true, 00:24:05.088 "num_base_bdevs": 2, 00:24:05.088 "num_base_bdevs_discovered": 2, 00:24:05.088 "num_base_bdevs_operational": 2, 00:24:05.088 "process": { 00:24:05.088 "type": "rebuild", 00:24:05.088 "target": "spare", 00:24:05.088 "progress": { 00:24:05.088 
"blocks": 24576, 00:24:05.088 "percent": 38 00:24:05.088 } 00:24:05.088 }, 00:24:05.088 "base_bdevs_list": [ 00:24:05.088 { 00:24:05.088 "name": "spare", 00:24:05.088 "uuid": "2ba02ba8-6f8e-5aab-8550-9d8ff9e45678", 00:24:05.088 "is_configured": true, 00:24:05.088 "data_offset": 2048, 00:24:05.088 "data_size": 63488 00:24:05.088 }, 00:24:05.088 { 00:24:05.088 "name": "BaseBdev2", 00:24:05.089 "uuid": "539a50b6-a842-525e-b88f-aee6e5db8fb4", 00:24:05.089 "is_configured": true, 00:24:05.089 "data_offset": 2048, 00:24:05.089 "data_size": 63488 00:24:05.089 } 00:24:05.089 ] 00:24:05.089 }' 00:24:05.089 13:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:05.089 13:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:05.089 13:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:05.089 13:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:05.089 13:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:05.348 [2024-07-26 13:23:45.783879] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:05.348 [2024-07-26 13:23:45.856034] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:05.348 [2024-07-26 13:23:45.856075] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:05.348 [2024-07-26 13:23:45.856089] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:05.348 [2024-07-26 13:23:45.856097] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:05.607 13:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:24:05.607 13:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:05.607 13:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:05.608 13:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:05.608 13:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:05.608 13:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:05.608 13:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:05.608 13:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:05.608 13:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:05.608 13:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:05.608 13:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.608 13:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.608 13:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:05.608 "name": "raid_bdev1", 00:24:05.608 "uuid": "58a55173-69b6-43c2-a0b7-3fccc42753bd", 00:24:05.608 "strip_size_kb": 0, 00:24:05.608 "state": "online", 00:24:05.608 "raid_level": "raid1", 00:24:05.608 "superblock": true, 00:24:05.608 "num_base_bdevs": 2, 00:24:05.608 "num_base_bdevs_discovered": 1, 00:24:05.608 "num_base_bdevs_operational": 1, 00:24:05.608 "base_bdevs_list": [ 00:24:05.608 { 00:24:05.608 "name": null, 00:24:05.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.608 "is_configured": false, 00:24:05.608 
"data_offset": 2048, 00:24:05.608 "data_size": 63488 00:24:05.608 }, 00:24:05.608 { 00:24:05.608 "name": "BaseBdev2", 00:24:05.608 "uuid": "539a50b6-a842-525e-b88f-aee6e5db8fb4", 00:24:05.608 "is_configured": true, 00:24:05.608 "data_offset": 2048, 00:24:05.608 "data_size": 63488 00:24:05.608 } 00:24:05.608 ] 00:24:05.608 }' 00:24:05.608 13:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:05.608 13:23:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:06.176 13:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:06.176 13:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:06.176 13:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:06.176 13:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:06.176 13:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:06.176 13:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.176 13:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.483 13:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:06.483 "name": "raid_bdev1", 00:24:06.483 "uuid": "58a55173-69b6-43c2-a0b7-3fccc42753bd", 00:24:06.483 "strip_size_kb": 0, 00:24:06.483 "state": "online", 00:24:06.483 "raid_level": "raid1", 00:24:06.483 "superblock": true, 00:24:06.483 "num_base_bdevs": 2, 00:24:06.483 "num_base_bdevs_discovered": 1, 00:24:06.483 "num_base_bdevs_operational": 1, 00:24:06.483 "base_bdevs_list": [ 00:24:06.483 { 00:24:06.483 "name": null, 00:24:06.483 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:24:06.483 "is_configured": false, 00:24:06.483 "data_offset": 2048, 00:24:06.483 "data_size": 63488 00:24:06.483 }, 00:24:06.483 { 00:24:06.483 "name": "BaseBdev2", 00:24:06.483 "uuid": "539a50b6-a842-525e-b88f-aee6e5db8fb4", 00:24:06.483 "is_configured": true, 00:24:06.483 "data_offset": 2048, 00:24:06.483 "data_size": 63488 00:24:06.483 } 00:24:06.483 ] 00:24:06.483 }' 00:24:06.483 13:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:06.483 13:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:06.483 13:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:06.758 13:23:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:06.758 13:23:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:06.758 13:23:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:07.017 [2024-07-26 13:23:47.449034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:07.017 [2024-07-26 13:23:47.449074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:07.017 [2024-07-26 13:23:47.449092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16f96a0 00:24:07.017 [2024-07-26 13:23:47.449103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:07.017 [2024-07-26 13:23:47.449424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:07.017 [2024-07-26 13:23:47.449440] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:07.017 [2024-07-26 13:23:47.449497] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:07.017 [2024-07-26 13:23:47.449509] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:07.017 [2024-07-26 13:23:47.449518] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:07.017 BaseBdev1 00:24:07.017 13:23:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@789 -- # sleep 1 00:24:07.956 13:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:07.956 13:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:07.956 13:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:07.956 13:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:07.956 13:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:07.956 13:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:07.956 13:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:07.956 13:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:07.956 13:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:07.956 13:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:07.956 13:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.956 13:23:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.215 13:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:08.215 "name": "raid_bdev1", 00:24:08.215 "uuid": "58a55173-69b6-43c2-a0b7-3fccc42753bd", 00:24:08.215 "strip_size_kb": 0, 00:24:08.215 "state": "online", 00:24:08.215 "raid_level": "raid1", 00:24:08.215 "superblock": true, 00:24:08.215 "num_base_bdevs": 2, 00:24:08.215 "num_base_bdevs_discovered": 1, 00:24:08.215 "num_base_bdevs_operational": 1, 00:24:08.215 "base_bdevs_list": [ 00:24:08.215 { 00:24:08.215 "name": null, 00:24:08.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:08.215 "is_configured": false, 00:24:08.215 "data_offset": 2048, 00:24:08.215 "data_size": 63488 00:24:08.215 }, 00:24:08.215 { 00:24:08.215 "name": "BaseBdev2", 00:24:08.215 "uuid": "539a50b6-a842-525e-b88f-aee6e5db8fb4", 00:24:08.215 "is_configured": true, 00:24:08.215 "data_offset": 2048, 00:24:08.215 "data_size": 63488 00:24:08.215 } 00:24:08.215 ] 00:24:08.215 }' 00:24:08.215 13:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:08.215 13:23:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:08.783 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:08.783 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:08.783 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:08.783 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:08.783 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:08.783 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.783 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.042 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:09.042 "name": "raid_bdev1", 00:24:09.042 "uuid": "58a55173-69b6-43c2-a0b7-3fccc42753bd", 00:24:09.042 "strip_size_kb": 0, 00:24:09.042 "state": "online", 00:24:09.042 "raid_level": "raid1", 00:24:09.042 "superblock": true, 00:24:09.042 "num_base_bdevs": 2, 00:24:09.042 "num_base_bdevs_discovered": 1, 00:24:09.042 "num_base_bdevs_operational": 1, 00:24:09.042 "base_bdevs_list": [ 00:24:09.042 { 00:24:09.042 "name": null, 00:24:09.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.042 "is_configured": false, 00:24:09.042 "data_offset": 2048, 00:24:09.042 "data_size": 63488 00:24:09.042 }, 00:24:09.042 { 00:24:09.042 "name": "BaseBdev2", 00:24:09.042 "uuid": "539a50b6-a842-525e-b88f-aee6e5db8fb4", 00:24:09.042 "is_configured": true, 00:24:09.042 "data_offset": 2048, 00:24:09.042 "data_size": 63488 00:24:09.042 } 00:24:09.042 ] 00:24:09.042 }' 00:24:09.042 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:09.042 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:09.042 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:09.302 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:09.302 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:09.302 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local 
es=0 00:24:09.302 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:09.302 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:09.302 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:09.302 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:09.302 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:09.302 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:09.302 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:09.302 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:09.302 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:24:09.302 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:09.302 [2024-07-26 13:23:49.807573] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:09.302 [2024-07-26 13:23:49.807690] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:09.302 
[2024-07-26 13:23:49.807703] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:09.302 request: 00:24:09.302 { 00:24:09.302 "base_bdev": "BaseBdev1", 00:24:09.302 "raid_bdev": "raid_bdev1", 00:24:09.302 "method": "bdev_raid_add_base_bdev", 00:24:09.302 "req_id": 1 00:24:09.302 } 00:24:09.302 Got JSON-RPC error response 00:24:09.302 response: 00:24:09.302 { 00:24:09.302 "code": -22, 00:24:09.302 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:09.302 } 00:24:09.302 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:24:09.302 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:09.302 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:09.302 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:09.302 13:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@793 -- # sleep 1 00:24:10.682 13:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:10.682 13:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:10.682 13:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:10.682 13:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:10.682 13:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:10.682 13:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:24:10.682 13:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:10.682 13:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:10.682 13:23:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:10.682 13:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:10.682 13:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.682 13:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.682 13:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:10.682 "name": "raid_bdev1", 00:24:10.682 "uuid": "58a55173-69b6-43c2-a0b7-3fccc42753bd", 00:24:10.682 "strip_size_kb": 0, 00:24:10.682 "state": "online", 00:24:10.682 "raid_level": "raid1", 00:24:10.682 "superblock": true, 00:24:10.682 "num_base_bdevs": 2, 00:24:10.682 "num_base_bdevs_discovered": 1, 00:24:10.682 "num_base_bdevs_operational": 1, 00:24:10.682 "base_bdevs_list": [ 00:24:10.682 { 00:24:10.682 "name": null, 00:24:10.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:10.682 "is_configured": false, 00:24:10.682 "data_offset": 2048, 00:24:10.682 "data_size": 63488 00:24:10.682 }, 00:24:10.682 { 00:24:10.682 "name": "BaseBdev2", 00:24:10.682 "uuid": "539a50b6-a842-525e-b88f-aee6e5db8fb4", 00:24:10.682 "is_configured": true, 00:24:10.682 "data_offset": 2048, 00:24:10.682 "data_size": 63488 00:24:10.682 } 00:24:10.682 ] 00:24:10.682 }' 00:24:10.682 13:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:10.682 13:23:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:11.251 13:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:11.251 13:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:11.251 13:23:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:11.251 13:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:11.251 13:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:11.251 13:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.251 13:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.511 13:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:11.511 "name": "raid_bdev1", 00:24:11.511 "uuid": "58a55173-69b6-43c2-a0b7-3fccc42753bd", 00:24:11.511 "strip_size_kb": 0, 00:24:11.511 "state": "online", 00:24:11.511 "raid_level": "raid1", 00:24:11.511 "superblock": true, 00:24:11.511 "num_base_bdevs": 2, 00:24:11.511 "num_base_bdevs_discovered": 1, 00:24:11.511 "num_base_bdevs_operational": 1, 00:24:11.511 "base_bdevs_list": [ 00:24:11.511 { 00:24:11.511 "name": null, 00:24:11.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:11.511 "is_configured": false, 00:24:11.511 "data_offset": 2048, 00:24:11.511 "data_size": 63488 00:24:11.511 }, 00:24:11.511 { 00:24:11.511 "name": "BaseBdev2", 00:24:11.511 "uuid": "539a50b6-a842-525e-b88f-aee6e5db8fb4", 00:24:11.511 "is_configured": true, 00:24:11.511 "data_offset": 2048, 00:24:11.511 "data_size": 63488 00:24:11.511 } 00:24:11.511 ] 00:24:11.511 }' 00:24:11.511 13:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:11.511 13:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:11.511 13:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:11.511 13:23:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:11.511 13:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@798 -- # killprocess 793737 00:24:11.511 13:23:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 793737 ']' 00:24:11.511 13:23:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 793737 00:24:11.511 13:23:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:24:11.511 13:23:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:11.511 13:23:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 793737 00:24:11.511 13:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:11.511 13:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:11.511 13:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 793737' 00:24:11.511 killing process with pid 793737 00:24:11.511 13:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 793737 00:24:11.511 Received shutdown signal, test time was about 26.059654 seconds 00:24:11.511 00:24:11.511 Latency(us) 00:24:11.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.511 =================================================================================================================== 00:24:11.511 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:11.511 [2024-07-26 13:23:52.005401] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:11.511 [2024-07-26 13:23:52.005490] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:11.511 [2024-07-26 13:23:52.005531] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base 
bdevs is 0, going to free all in destruct 00:24:11.511 [2024-07-26 13:23:52.005542] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16fdc80 name raid_bdev1, state offline 00:24:11.511 13:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 793737 00:24:11.511 [2024-07-26 13:23:52.024447] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:11.771 13:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@800 -- # return 0 00:24:11.771 00:24:11.771 real 0m30.729s 00:24:11.771 user 0m47.703s 00:24:11.771 sys 0m4.405s 00:24:11.771 13:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:11.771 13:23:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:24:11.771 ************************************ 00:24:11.771 END TEST raid_rebuild_test_sb_io 00:24:11.771 ************************************ 00:24:11.771 13:23:52 bdev_raid -- bdev/bdev_raid.sh@956 -- # for n in 2 4 00:24:11.771 13:23:52 bdev_raid -- bdev/bdev_raid.sh@957 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:24:11.771 13:23:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:24:11.771 13:23:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:11.771 13:23:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:12.031 ************************************ 00:24:12.031 START TEST raid_rebuild_test 00:24:12.031 ************************************ 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:24:12.031 
13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size 
00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=799473 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 799473 /var/tmp/spdk-raid.sock 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 799473 ']' 00:24:12.031 13:23:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:12.032 13:23:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:12.032 13:23:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:12.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:24:12.032 13:23:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:12.032 13:23:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.032 [2024-07-26 13:23:52.379489] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:24:12.032 [2024-07-26 13:23:52.379547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid799473 ] 00:24:12.032 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:12.032 Zero copy mechanism will not be used. 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3d:02.0 
cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3f:01.6 cannot be used 
00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:12.032 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:12.032 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:12.032 [2024-07-26 13:23:52.510571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.291 [2024-07-26 13:23:52.597498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.292 [2024-07-26 13:23:52.661907] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:12.292 [2024-07-26 13:23:52.661940] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:12.869 13:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:12.869 13:23:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:24:12.869 13:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in 
"${base_bdevs[@]}" 00:24:12.869 13:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:13.128 BaseBdev1_malloc 00:24:13.128 13:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:13.387 [2024-07-26 13:23:53.722512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:13.388 [2024-07-26 13:23:53.722555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:13.388 [2024-07-26 13:23:53.722576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13515f0 00:24:13.388 [2024-07-26 13:23:53.722588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:13.388 [2024-07-26 13:23:53.724098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:13.388 [2024-07-26 13:23:53.724123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:13.388 BaseBdev1 00:24:13.388 13:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:24:13.388 13:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:13.647 BaseBdev2_malloc 00:24:13.647 13:23:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:13.906 [2024-07-26 13:23:54.180245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:13.906 [2024-07-26 13:23:54.180285] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:13.906 [2024-07-26 13:23:54.180302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14f5130 00:24:13.907 [2024-07-26 13:23:54.180313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:13.907 [2024-07-26 13:23:54.181718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:13.907 [2024-07-26 13:23:54.181743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:13.907 BaseBdev2 00:24:13.907 13:23:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:24:13.907 13:23:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:13.907 BaseBdev3_malloc 00:24:13.907 13:23:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:14.165 [2024-07-26 13:23:54.625769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:14.165 [2024-07-26 13:23:54.625809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:14.165 [2024-07-26 13:23:54.625826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14eb420 00:24:14.165 [2024-07-26 13:23:54.625838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:14.165 [2024-07-26 13:23:54.627162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:14.165 [2024-07-26 13:23:54.627187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:14.165 BaseBdev3 00:24:14.165 13:23:54 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:24:14.165 13:23:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:14.424 BaseBdev4_malloc 00:24:14.425 13:23:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:14.684 [2024-07-26 13:23:55.059071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:14.684 [2024-07-26 13:23:55.059112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:14.684 [2024-07-26 13:23:55.059130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14ebd40 00:24:14.684 [2024-07-26 13:23:55.059148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:14.684 [2024-07-26 13:23:55.060454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:14.684 [2024-07-26 13:23:55.060480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:14.684 BaseBdev4 00:24:14.684 13:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:14.943 spare_malloc 00:24:14.943 13:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:15.202 spare_delay 00:24:15.202 13:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
spare_delay -p spare 00:24:15.202 [2024-07-26 13:23:55.704928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:15.202 [2024-07-26 13:23:55.704964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:15.202 [2024-07-26 13:23:55.704982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x134adb0 00:24:15.202 [2024-07-26 13:23:55.704993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:15.202 [2024-07-26 13:23:55.706318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:15.202 [2024-07-26 13:23:55.706344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:15.202 spare 00:24:15.202 13:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:24:15.461 [2024-07-26 13:23:55.929541] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:15.461 [2024-07-26 13:23:55.930625] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:15.461 [2024-07-26 13:23:55.930672] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:15.461 [2024-07-26 13:23:55.930715] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:15.461 [2024-07-26 13:23:55.930791] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x134d5b0 00:24:15.461 [2024-07-26 13:23:55.930800] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:24:15.461 [2024-07-26 13:23:55.930982] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1350380 00:24:15.461 [2024-07-26 13:23:55.931114] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x134d5b0 00:24:15.461 [2024-07-26 13:23:55.931123] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x134d5b0 00:24:15.461 [2024-07-26 13:23:55.931229] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:15.461 13:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:15.461 13:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:15.461 13:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:15.461 13:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:15.461 13:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:15.461 13:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:15.461 13:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:15.461 13:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:15.461 13:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:15.461 13:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:15.461 13:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.461 13:23:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.720 13:23:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:15.720 "name": "raid_bdev1", 00:24:15.720 "uuid": "d8e166c0-b8a7-45cd-82b5-52b8f534747e", 00:24:15.720 "strip_size_kb": 0, 00:24:15.720 "state": "online", 00:24:15.720 "raid_level": "raid1", 00:24:15.720 "superblock": false, 
00:24:15.720 "num_base_bdevs": 4, 00:24:15.720 "num_base_bdevs_discovered": 4, 00:24:15.720 "num_base_bdevs_operational": 4, 00:24:15.720 "base_bdevs_list": [ 00:24:15.720 { 00:24:15.720 "name": "BaseBdev1", 00:24:15.720 "uuid": "e1121bc2-635e-590a-95ff-9f974277ae0a", 00:24:15.720 "is_configured": true, 00:24:15.720 "data_offset": 0, 00:24:15.720 "data_size": 65536 00:24:15.720 }, 00:24:15.720 { 00:24:15.720 "name": "BaseBdev2", 00:24:15.720 "uuid": "0813e75a-5550-566c-98c2-a597158df744", 00:24:15.720 "is_configured": true, 00:24:15.720 "data_offset": 0, 00:24:15.720 "data_size": 65536 00:24:15.720 }, 00:24:15.720 { 00:24:15.720 "name": "BaseBdev3", 00:24:15.720 "uuid": "da8d3ce7-71d5-5a45-ab89-e89938fc91c0", 00:24:15.720 "is_configured": true, 00:24:15.720 "data_offset": 0, 00:24:15.720 "data_size": 65536 00:24:15.720 }, 00:24:15.720 { 00:24:15.720 "name": "BaseBdev4", 00:24:15.720 "uuid": "67e92a0c-06be-5455-8cc2-8e272b66aa27", 00:24:15.720 "is_configured": true, 00:24:15.720 "data_offset": 0, 00:24:15.720 "data_size": 65536 00:24:15.720 } 00:24:15.720 ] 00:24:15.720 }' 00:24:15.720 13:23:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:15.720 13:23:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.288 13:23:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:24:16.288 13:23:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:16.547 [2024-07-26 13:23:56.980572] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:16.547 13:23:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:24:16.547 13:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:24:16.547 13:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:16.806 13:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:24:16.806 13:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:24:16.806 13:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:24:16.806 13:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:24:16.806 13:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:16.806 13:23:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:16.806 13:23:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:16.806 13:23:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:16.806 13:23:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:16.806 13:23:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:16.806 13:23:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:24:16.806 13:23:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:16.806 13:23:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:16.806 13:23:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:17.065 [2024-07-26 13:23:57.437533] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x134d080 00:24:17.065 /dev/nbd0 00:24:17.065 13:23:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:17.065 13:23:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:24:17.065 13:23:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:24:17.065 13:23:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:24:17.065 13:23:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:17.065 13:23:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:17.065 13:23:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:24:17.065 13:23:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:24:17.065 13:23:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:17.065 13:23:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:17.065 13:23:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:17.065 1+0 records in 00:24:17.065 1+0 records out 00:24:17.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239105 s, 17.1 MB/s 00:24:17.065 13:23:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:17.065 13:23:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:24:17.065 13:23:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:17.065 13:23:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:17.066 13:23:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:24:17.066 13:23:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:17.066 13:23:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:17.066 13:23:57 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:24:17.066 13:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:24:17.066 13:23:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:24:23.630 65536+0 records in 00:24:23.630 65536+0 records out 00:24:23.630 33554432 bytes (34 MB, 32 MiB) copied, 5.81246 s, 5.8 MB/s 00:24:23.630 13:24:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:23.630 13:24:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:23.631 13:24:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:23.631 13:24:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:23.631 13:24:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:24:23.631 13:24:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:23.631 13:24:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:23.631 13:24:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:23.631 [2024-07-26 13:24:03.565328] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:23.631 13:24:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:23.631 13:24:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:23.631 13:24:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:23.631 13:24:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:23.631 13:24:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:24:23.631 13:24:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:23.631 13:24:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:23.631 13:24:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:23.631 [2024-07-26 13:24:03.781928] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:23.631 13:24:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:23.631 13:24:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:23.631 13:24:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:23.631 13:24:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:23.631 13:24:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:23.631 13:24:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:23.631 13:24:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:23.631 13:24:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:23.631 13:24:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:23.631 13:24:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:23.631 13:24:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.631 13:24:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.631 13:24:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:24:23.631 "name": "raid_bdev1", 00:24:23.631 "uuid": "d8e166c0-b8a7-45cd-82b5-52b8f534747e", 00:24:23.631 "strip_size_kb": 0, 00:24:23.631 "state": "online", 00:24:23.631 "raid_level": "raid1", 00:24:23.631 "superblock": false, 00:24:23.631 "num_base_bdevs": 4, 00:24:23.631 "num_base_bdevs_discovered": 3, 00:24:23.631 "num_base_bdevs_operational": 3, 00:24:23.631 "base_bdevs_list": [ 00:24:23.631 { 00:24:23.631 "name": null, 00:24:23.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.631 "is_configured": false, 00:24:23.631 "data_offset": 0, 00:24:23.631 "data_size": 65536 00:24:23.631 }, 00:24:23.631 { 00:24:23.631 "name": "BaseBdev2", 00:24:23.631 "uuid": "0813e75a-5550-566c-98c2-a597158df744", 00:24:23.631 "is_configured": true, 00:24:23.631 "data_offset": 0, 00:24:23.631 "data_size": 65536 00:24:23.631 }, 00:24:23.631 { 00:24:23.631 "name": "BaseBdev3", 00:24:23.631 "uuid": "da8d3ce7-71d5-5a45-ab89-e89938fc91c0", 00:24:23.631 "is_configured": true, 00:24:23.631 "data_offset": 0, 00:24:23.631 "data_size": 65536 00:24:23.631 }, 00:24:23.631 { 00:24:23.631 "name": "BaseBdev4", 00:24:23.631 "uuid": "67e92a0c-06be-5455-8cc2-8e272b66aa27", 00:24:23.631 "is_configured": true, 00:24:23.631 "data_offset": 0, 00:24:23.631 "data_size": 65536 00:24:23.631 } 00:24:23.631 ] 00:24:23.631 }' 00:24:23.631 13:24:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:23.631 13:24:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.227 13:24:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:24.486 [2024-07-26 13:24:04.812663] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:24.486 [2024-07-26 13:24:04.816564] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13504a0 00:24:24.486 [2024-07-26 13:24:04.818636] 
bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:24.486 13:24:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:25.422 13:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:25.422 13:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:25.422 13:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:25.422 13:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:25.422 13:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:25.422 13:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.422 13:24:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:25.681 13:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:25.681 "name": "raid_bdev1", 00:24:25.681 "uuid": "d8e166c0-b8a7-45cd-82b5-52b8f534747e", 00:24:25.681 "strip_size_kb": 0, 00:24:25.681 "state": "online", 00:24:25.681 "raid_level": "raid1", 00:24:25.681 "superblock": false, 00:24:25.681 "num_base_bdevs": 4, 00:24:25.681 "num_base_bdevs_discovered": 4, 00:24:25.681 "num_base_bdevs_operational": 4, 00:24:25.681 "process": { 00:24:25.681 "type": "rebuild", 00:24:25.681 "target": "spare", 00:24:25.681 "progress": { 00:24:25.681 "blocks": 24576, 00:24:25.681 "percent": 37 00:24:25.681 } 00:24:25.681 }, 00:24:25.681 "base_bdevs_list": [ 00:24:25.681 { 00:24:25.681 "name": "spare", 00:24:25.681 "uuid": "f03ac490-e600-5e73-af33-d1c9915abba2", 00:24:25.681 "is_configured": true, 00:24:25.681 "data_offset": 0, 00:24:25.681 "data_size": 65536 00:24:25.681 }, 00:24:25.681 { 00:24:25.681 
"name": "BaseBdev2", 00:24:25.681 "uuid": "0813e75a-5550-566c-98c2-a597158df744", 00:24:25.681 "is_configured": true, 00:24:25.681 "data_offset": 0, 00:24:25.681 "data_size": 65536 00:24:25.681 }, 00:24:25.681 { 00:24:25.681 "name": "BaseBdev3", 00:24:25.681 "uuid": "da8d3ce7-71d5-5a45-ab89-e89938fc91c0", 00:24:25.681 "is_configured": true, 00:24:25.681 "data_offset": 0, 00:24:25.681 "data_size": 65536 00:24:25.681 }, 00:24:25.681 { 00:24:25.681 "name": "BaseBdev4", 00:24:25.681 "uuid": "67e92a0c-06be-5455-8cc2-8e272b66aa27", 00:24:25.681 "is_configured": true, 00:24:25.681 "data_offset": 0, 00:24:25.681 "data_size": 65536 00:24:25.681 } 00:24:25.681 ] 00:24:25.681 }' 00:24:25.681 13:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:25.681 13:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:25.681 13:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:25.681 13:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:25.681 13:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:25.939 [2024-07-26 13:24:06.367615] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:25.939 [2024-07-26 13:24:06.430355] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:25.939 [2024-07-26 13:24:06.430395] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:25.939 [2024-07-26 13:24:06.430410] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:25.939 [2024-07-26 13:24:06.430418] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:25.939 13:24:06 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:25.939 13:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:25.939 13:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:25.939 13:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:25.939 13:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:25.939 13:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:25.939 13:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:25.939 13:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:25.939 13:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:25.939 13:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:25.939 13:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.940 13:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:26.198 13:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:26.198 "name": "raid_bdev1", 00:24:26.198 "uuid": "d8e166c0-b8a7-45cd-82b5-52b8f534747e", 00:24:26.198 "strip_size_kb": 0, 00:24:26.198 "state": "online", 00:24:26.198 "raid_level": "raid1", 00:24:26.198 "superblock": false, 00:24:26.198 "num_base_bdevs": 4, 00:24:26.198 "num_base_bdevs_discovered": 3, 00:24:26.198 "num_base_bdevs_operational": 3, 00:24:26.198 "base_bdevs_list": [ 00:24:26.198 { 00:24:26.198 "name": null, 00:24:26.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.198 "is_configured": false, 
00:24:26.198 "data_offset": 0, 00:24:26.198 "data_size": 65536 00:24:26.198 }, 00:24:26.198 { 00:24:26.198 "name": "BaseBdev2", 00:24:26.198 "uuid": "0813e75a-5550-566c-98c2-a597158df744", 00:24:26.198 "is_configured": true, 00:24:26.198 "data_offset": 0, 00:24:26.198 "data_size": 65536 00:24:26.198 }, 00:24:26.198 { 00:24:26.198 "name": "BaseBdev3", 00:24:26.198 "uuid": "da8d3ce7-71d5-5a45-ab89-e89938fc91c0", 00:24:26.198 "is_configured": true, 00:24:26.198 "data_offset": 0, 00:24:26.198 "data_size": 65536 00:24:26.198 }, 00:24:26.198 { 00:24:26.198 "name": "BaseBdev4", 00:24:26.198 "uuid": "67e92a0c-06be-5455-8cc2-8e272b66aa27", 00:24:26.198 "is_configured": true, 00:24:26.198 "data_offset": 0, 00:24:26.198 "data_size": 65536 00:24:26.198 } 00:24:26.198 ] 00:24:26.198 }' 00:24:26.198 13:24:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:26.198 13:24:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:26.764 13:24:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:26.764 13:24:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:26.764 13:24:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:26.764 13:24:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:26.764 13:24:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:26.764 13:24:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:26.764 13:24:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.023 13:24:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:27.023 "name": "raid_bdev1", 00:24:27.023 "uuid": 
"d8e166c0-b8a7-45cd-82b5-52b8f534747e", 00:24:27.023 "strip_size_kb": 0, 00:24:27.023 "state": "online", 00:24:27.023 "raid_level": "raid1", 00:24:27.023 "superblock": false, 00:24:27.023 "num_base_bdevs": 4, 00:24:27.023 "num_base_bdevs_discovered": 3, 00:24:27.023 "num_base_bdevs_operational": 3, 00:24:27.023 "base_bdevs_list": [ 00:24:27.023 { 00:24:27.023 "name": null, 00:24:27.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.023 "is_configured": false, 00:24:27.023 "data_offset": 0, 00:24:27.023 "data_size": 65536 00:24:27.023 }, 00:24:27.023 { 00:24:27.023 "name": "BaseBdev2", 00:24:27.023 "uuid": "0813e75a-5550-566c-98c2-a597158df744", 00:24:27.023 "is_configured": true, 00:24:27.023 "data_offset": 0, 00:24:27.023 "data_size": 65536 00:24:27.023 }, 00:24:27.023 { 00:24:27.023 "name": "BaseBdev3", 00:24:27.023 "uuid": "da8d3ce7-71d5-5a45-ab89-e89938fc91c0", 00:24:27.023 "is_configured": true, 00:24:27.023 "data_offset": 0, 00:24:27.023 "data_size": 65536 00:24:27.023 }, 00:24:27.023 { 00:24:27.023 "name": "BaseBdev4", 00:24:27.023 "uuid": "67e92a0c-06be-5455-8cc2-8e272b66aa27", 00:24:27.023 "is_configured": true, 00:24:27.023 "data_offset": 0, 00:24:27.023 "data_size": 65536 00:24:27.023 } 00:24:27.023 ] 00:24:27.023 }' 00:24:27.024 13:24:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:27.024 13:24:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:27.283 13:24:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:27.283 13:24:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:27.283 13:24:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:27.283 [2024-07-26 13:24:07.797822] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev spare is claimed 00:24:27.283 [2024-07-26 13:24:07.801730] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1349610 00:24:27.283 [2024-07-26 13:24:07.803120] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:27.542 13:24:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:24:28.476 13:24:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:28.476 13:24:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:28.476 13:24:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:28.476 13:24:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:28.476 13:24:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:28.476 13:24:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.476 13:24:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:28.733 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:28.733 "name": "raid_bdev1", 00:24:28.733 "uuid": "d8e166c0-b8a7-45cd-82b5-52b8f534747e", 00:24:28.733 "strip_size_kb": 0, 00:24:28.733 "state": "online", 00:24:28.733 "raid_level": "raid1", 00:24:28.733 "superblock": false, 00:24:28.733 "num_base_bdevs": 4, 00:24:28.733 "num_base_bdevs_discovered": 4, 00:24:28.733 "num_base_bdevs_operational": 4, 00:24:28.733 "process": { 00:24:28.733 "type": "rebuild", 00:24:28.733 "target": "spare", 00:24:28.733 "progress": { 00:24:28.733 "blocks": 24576, 00:24:28.733 "percent": 37 00:24:28.733 } 00:24:28.733 }, 00:24:28.733 "base_bdevs_list": [ 00:24:28.733 { 00:24:28.733 "name": "spare", 00:24:28.733 "uuid": 
"f03ac490-e600-5e73-af33-d1c9915abba2", 00:24:28.733 "is_configured": true, 00:24:28.733 "data_offset": 0, 00:24:28.733 "data_size": 65536 00:24:28.733 }, 00:24:28.733 { 00:24:28.733 "name": "BaseBdev2", 00:24:28.733 "uuid": "0813e75a-5550-566c-98c2-a597158df744", 00:24:28.733 "is_configured": true, 00:24:28.733 "data_offset": 0, 00:24:28.733 "data_size": 65536 00:24:28.733 }, 00:24:28.733 { 00:24:28.733 "name": "BaseBdev3", 00:24:28.733 "uuid": "da8d3ce7-71d5-5a45-ab89-e89938fc91c0", 00:24:28.733 "is_configured": true, 00:24:28.733 "data_offset": 0, 00:24:28.733 "data_size": 65536 00:24:28.733 }, 00:24:28.733 { 00:24:28.733 "name": "BaseBdev4", 00:24:28.733 "uuid": "67e92a0c-06be-5455-8cc2-8e272b66aa27", 00:24:28.733 "is_configured": true, 00:24:28.733 "data_offset": 0, 00:24:28.733 "data_size": 65536 00:24:28.733 } 00:24:28.733 ] 00:24:28.733 }' 00:24:28.733 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:28.733 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:28.733 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:28.733 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:28.733 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:24:28.733 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:24:28.733 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:24:28.733 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:24:28.733 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:28.992 [2024-07-26 13:24:09.343723] 
bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:28.992 [2024-07-26 13:24:09.414916] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x1349610 00:24:28.992 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:24:28.992 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:24:28.992 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:28.992 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:28.992 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:28.992 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:28.992 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:28.992 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.992 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:29.250 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:29.251 "name": "raid_bdev1", 00:24:29.251 "uuid": "d8e166c0-b8a7-45cd-82b5-52b8f534747e", 00:24:29.251 "strip_size_kb": 0, 00:24:29.251 "state": "online", 00:24:29.251 "raid_level": "raid1", 00:24:29.251 "superblock": false, 00:24:29.251 "num_base_bdevs": 4, 00:24:29.251 "num_base_bdevs_discovered": 3, 00:24:29.251 "num_base_bdevs_operational": 3, 00:24:29.251 "process": { 00:24:29.251 "type": "rebuild", 00:24:29.251 "target": "spare", 00:24:29.251 "progress": { 00:24:29.251 "blocks": 36864, 00:24:29.251 "percent": 56 00:24:29.251 } 00:24:29.251 }, 00:24:29.251 "base_bdevs_list": [ 00:24:29.251 { 00:24:29.251 
"name": "spare", 00:24:29.251 "uuid": "f03ac490-e600-5e73-af33-d1c9915abba2", 00:24:29.251 "is_configured": true, 00:24:29.251 "data_offset": 0, 00:24:29.251 "data_size": 65536 00:24:29.251 }, 00:24:29.251 { 00:24:29.251 "name": null, 00:24:29.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:29.251 "is_configured": false, 00:24:29.251 "data_offset": 0, 00:24:29.251 "data_size": 65536 00:24:29.251 }, 00:24:29.251 { 00:24:29.251 "name": "BaseBdev3", 00:24:29.251 "uuid": "da8d3ce7-71d5-5a45-ab89-e89938fc91c0", 00:24:29.251 "is_configured": true, 00:24:29.251 "data_offset": 0, 00:24:29.251 "data_size": 65536 00:24:29.251 }, 00:24:29.251 { 00:24:29.251 "name": "BaseBdev4", 00:24:29.251 "uuid": "67e92a0c-06be-5455-8cc2-8e272b66aa27", 00:24:29.251 "is_configured": true, 00:24:29.251 "data_offset": 0, 00:24:29.251 "data_size": 65536 00:24:29.251 } 00:24:29.251 ] 00:24:29.251 }' 00:24:29.251 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:29.251 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:29.251 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:29.251 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:29.251 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=835 00:24:29.251 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:29.251 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:29.251 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:29.251 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:29.251 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:29.251 
13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:29.251 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:29.251 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:29.510 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:29.510 "name": "raid_bdev1", 00:24:29.510 "uuid": "d8e166c0-b8a7-45cd-82b5-52b8f534747e", 00:24:29.510 "strip_size_kb": 0, 00:24:29.510 "state": "online", 00:24:29.510 "raid_level": "raid1", 00:24:29.510 "superblock": false, 00:24:29.510 "num_base_bdevs": 4, 00:24:29.510 "num_base_bdevs_discovered": 3, 00:24:29.510 "num_base_bdevs_operational": 3, 00:24:29.510 "process": { 00:24:29.510 "type": "rebuild", 00:24:29.510 "target": "spare", 00:24:29.510 "progress": { 00:24:29.510 "blocks": 43008, 00:24:29.510 "percent": 65 00:24:29.510 } 00:24:29.510 }, 00:24:29.510 "base_bdevs_list": [ 00:24:29.510 { 00:24:29.510 "name": "spare", 00:24:29.510 "uuid": "f03ac490-e600-5e73-af33-d1c9915abba2", 00:24:29.510 "is_configured": true, 00:24:29.510 "data_offset": 0, 00:24:29.510 "data_size": 65536 00:24:29.510 }, 00:24:29.510 { 00:24:29.510 "name": null, 00:24:29.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:29.510 "is_configured": false, 00:24:29.510 "data_offset": 0, 00:24:29.510 "data_size": 65536 00:24:29.510 }, 00:24:29.510 { 00:24:29.510 "name": "BaseBdev3", 00:24:29.510 "uuid": "da8d3ce7-71d5-5a45-ab89-e89938fc91c0", 00:24:29.510 "is_configured": true, 00:24:29.510 "data_offset": 0, 00:24:29.510 "data_size": 65536 00:24:29.510 }, 00:24:29.510 { 00:24:29.510 "name": "BaseBdev4", 00:24:29.510 "uuid": "67e92a0c-06be-5455-8cc2-8e272b66aa27", 00:24:29.510 "is_configured": true, 00:24:29.510 "data_offset": 0, 00:24:29.510 "data_size": 65536 00:24:29.510 } 
00:24:29.510 ] 00:24:29.510 }' 00:24:29.510 13:24:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:29.510 13:24:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:29.510 13:24:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:29.768 13:24:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:24:29.768 13:24:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:24:30.701 [2024-07-26 13:24:11.026660] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:30.701 [2024-07-26 13:24:11.026713] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:30.701 [2024-07-26 13:24:11.026748] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:30.701 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:24:30.701 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:30.701 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:30.701 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:24:30.701 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:24:30.701 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:30.701 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.701 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:30.958 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:24:30.958 "name": "raid_bdev1", 00:24:30.958 "uuid": "d8e166c0-b8a7-45cd-82b5-52b8f534747e", 00:24:30.958 "strip_size_kb": 0, 00:24:30.958 "state": "online", 00:24:30.958 "raid_level": "raid1", 00:24:30.958 "superblock": false, 00:24:30.958 "num_base_bdevs": 4, 00:24:30.958 "num_base_bdevs_discovered": 3, 00:24:30.958 "num_base_bdevs_operational": 3, 00:24:30.958 "base_bdevs_list": [ 00:24:30.958 { 00:24:30.958 "name": "spare", 00:24:30.958 "uuid": "f03ac490-e600-5e73-af33-d1c9915abba2", 00:24:30.958 "is_configured": true, 00:24:30.958 "data_offset": 0, 00:24:30.958 "data_size": 65536 00:24:30.958 }, 00:24:30.958 { 00:24:30.958 "name": null, 00:24:30.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.958 "is_configured": false, 00:24:30.958 "data_offset": 0, 00:24:30.958 "data_size": 65536 00:24:30.958 }, 00:24:30.958 { 00:24:30.958 "name": "BaseBdev3", 00:24:30.958 "uuid": "da8d3ce7-71d5-5a45-ab89-e89938fc91c0", 00:24:30.958 "is_configured": true, 00:24:30.958 "data_offset": 0, 00:24:30.958 "data_size": 65536 00:24:30.958 }, 00:24:30.958 { 00:24:30.958 "name": "BaseBdev4", 00:24:30.958 "uuid": "67e92a0c-06be-5455-8cc2-8e272b66aa27", 00:24:30.958 "is_configured": true, 00:24:30.958 "data_offset": 0, 00:24:30.958 "data_size": 65536 00:24:30.958 } 00:24:30.958 ] 00:24:30.958 }' 00:24:30.958 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:30.958 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:30.958 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:30.958 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:24:30.958 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:24:30.958 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:30.958 
13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:24:30.958 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:30.958 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:30.958 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:30.958 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.959 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:31.216 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:31.216 "name": "raid_bdev1", 00:24:31.216 "uuid": "d8e166c0-b8a7-45cd-82b5-52b8f534747e", 00:24:31.216 "strip_size_kb": 0, 00:24:31.216 "state": "online", 00:24:31.216 "raid_level": "raid1", 00:24:31.216 "superblock": false, 00:24:31.216 "num_base_bdevs": 4, 00:24:31.216 "num_base_bdevs_discovered": 3, 00:24:31.216 "num_base_bdevs_operational": 3, 00:24:31.216 "base_bdevs_list": [ 00:24:31.216 { 00:24:31.216 "name": "spare", 00:24:31.216 "uuid": "f03ac490-e600-5e73-af33-d1c9915abba2", 00:24:31.216 "is_configured": true, 00:24:31.216 "data_offset": 0, 00:24:31.216 "data_size": 65536 00:24:31.216 }, 00:24:31.216 { 00:24:31.216 "name": null, 00:24:31.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.216 "is_configured": false, 00:24:31.216 "data_offset": 0, 00:24:31.216 "data_size": 65536 00:24:31.216 }, 00:24:31.216 { 00:24:31.216 "name": "BaseBdev3", 00:24:31.216 "uuid": "da8d3ce7-71d5-5a45-ab89-e89938fc91c0", 00:24:31.216 "is_configured": true, 00:24:31.216 "data_offset": 0, 00:24:31.216 "data_size": 65536 00:24:31.216 }, 00:24:31.216 { 00:24:31.216 "name": "BaseBdev4", 00:24:31.216 "uuid": "67e92a0c-06be-5455-8cc2-8e272b66aa27", 00:24:31.216 
"is_configured": true, 00:24:31.216 "data_offset": 0, 00:24:31.216 "data_size": 65536 00:24:31.216 } 00:24:31.216 ] 00:24:31.216 }' 00:24:31.217 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:31.217 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:31.217 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:31.217 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:31.217 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:31.217 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:31.217 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:31.217 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:31.217 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:31.217 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:31.217 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:31.217 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:31.217 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:31.217 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:31.217 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.217 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:31.476 13:24:11 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:31.476 "name": "raid_bdev1", 00:24:31.476 "uuid": "d8e166c0-b8a7-45cd-82b5-52b8f534747e", 00:24:31.476 "strip_size_kb": 0, 00:24:31.476 "state": "online", 00:24:31.476 "raid_level": "raid1", 00:24:31.476 "superblock": false, 00:24:31.476 "num_base_bdevs": 4, 00:24:31.476 "num_base_bdevs_discovered": 3, 00:24:31.476 "num_base_bdevs_operational": 3, 00:24:31.476 "base_bdevs_list": [ 00:24:31.476 { 00:24:31.476 "name": "spare", 00:24:31.476 "uuid": "f03ac490-e600-5e73-af33-d1c9915abba2", 00:24:31.476 "is_configured": true, 00:24:31.476 "data_offset": 0, 00:24:31.476 "data_size": 65536 00:24:31.476 }, 00:24:31.476 { 00:24:31.476 "name": null, 00:24:31.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.476 "is_configured": false, 00:24:31.476 "data_offset": 0, 00:24:31.476 "data_size": 65536 00:24:31.476 }, 00:24:31.476 { 00:24:31.476 "name": "BaseBdev3", 00:24:31.476 "uuid": "da8d3ce7-71d5-5a45-ab89-e89938fc91c0", 00:24:31.476 "is_configured": true, 00:24:31.476 "data_offset": 0, 00:24:31.476 "data_size": 65536 00:24:31.476 }, 00:24:31.476 { 00:24:31.476 "name": "BaseBdev4", 00:24:31.476 "uuid": "67e92a0c-06be-5455-8cc2-8e272b66aa27", 00:24:31.476 "is_configured": true, 00:24:31.476 "data_offset": 0, 00:24:31.476 "data_size": 65536 00:24:31.476 } 00:24:31.476 ] 00:24:31.476 }' 00:24:31.476 13:24:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:31.476 13:24:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.043 13:24:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:32.301 [2024-07-26 13:24:12.662171] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:32.301 [2024-07-26 13:24:12.662197] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:24:32.301 [2024-07-26 13:24:12.662251] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:32.301 [2024-07-26 13:24:12.662313] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:32.301 [2024-07-26 13:24:12.662324] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x134d5b0 name raid_bdev1, state offline 00:24:32.301 13:24:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.301 13:24:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:24:32.559 13:24:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:24:32.559 13:24:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:24:32.559 13:24:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:24:32.559 13:24:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:32.559 13:24:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:32.559 13:24:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:32.559 13:24:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:32.559 13:24:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:32.559 13:24:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:32.559 13:24:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:24:32.559 13:24:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:32.559 13:24:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:24:32.559 13:24:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:32.818 /dev/nbd0 00:24:32.818 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:32.818 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:32.818 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:24:32.818 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:24:32.818 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:32.818 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:32.818 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:24:32.818 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:24:32.818 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:32.818 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:32.819 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:32.819 1+0 records in 00:24:32.819 1+0 records out 00:24:32.819 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239939 s, 17.1 MB/s 00:24:32.819 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:32.819 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:24:32.819 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:32.819 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:32.819 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:24:32.819 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:32.819 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:32.819 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:33.078 /dev/nbd1 00:24:33.078 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:33.078 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:33.078 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:24:33.078 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:24:33.078 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:33.078 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:33.078 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:24:33.078 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:24:33.078 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:33.078 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:33.078 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:33.078 1+0 records in 00:24:33.078 1+0 records out 00:24:33.078 4096 bytes (4.1 kB, 4.0 KiB) 
copied, 0.000219736 s, 18.6 MB/s 00:24:33.078 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:33.078 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:24:33.078 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:33.078 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:33.078 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:24:33.078 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:33.078 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:33.078 13:24:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:33.078 13:24:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:33.078 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:33.078 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:33.078 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:33.078 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:24:33.078 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:33.078 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:33.336 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:33.336 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd0 00:24:33.336 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:33.336 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:33.336 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:33.336 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:33.336 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:33.336 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:33.336 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:33.336 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:33.596 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:33.596 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:33.596 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:33.596 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:33.596 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:33.596 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:33.596 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:24:33.596 13:24:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:24:33.596 13:24:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:24:33.596 13:24:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 799473 00:24:33.596 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 799473 ']' 00:24:33.596 13:24:13 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 799473 00:24:33.596 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:24:33.596 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:33.596 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 799473 00:24:33.596 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:33.596 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:33.596 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 799473' 00:24:33.596 killing process with pid 799473 00:24:33.596 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 799473 00:24:33.596 Received shutdown signal, test time was about 60.000000 seconds 00:24:33.596 00:24:33.596 Latency(us) 00:24:33.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.596 =================================================================================================================== 00:24:33.596 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:33.596 [2024-07-26 13:24:13.992930] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:33.596 13:24:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 799473 00:24:33.596 [2024-07-26 13:24:14.031792] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:33.855 13:24:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0 00:24:33.855 00:24:33.855 real 0m21.907s 00:24:33.855 user 0m30.267s 00:24:33.855 sys 0m4.419s 00:24:33.855 13:24:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:33.855 13:24:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:24:33.855 ************************************ 00:24:33.855 END TEST raid_rebuild_test 00:24:33.855 ************************************ 00:24:33.855 13:24:14 bdev_raid -- bdev/bdev_raid.sh@958 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:24:33.855 13:24:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:24:33.855 13:24:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:33.855 13:24:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:33.855 ************************************ 00:24:33.855 START TEST raid_rebuild_test_sb 00:24:33.855 ************************************ 00:24:33.855 13:24:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:24:33.855 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:24:33.855 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:24:33.855 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:24:33.855 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:24:33.855 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true 00:24:33.855 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:24:33.855 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:33.855 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:24:33.855 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:33.855 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:33.855 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:24:33.855 13:24:14 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:33.855 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:33.855 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:24:33.856 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:33.856 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:33.856 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:24:33.856 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:24:33.856 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:24:33.856 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:33.856 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:24:33.856 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:24:33.856 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:24:33.856 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:24:33.856 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:24:33.856 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:24:33.856 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:24:33.856 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:24:33.856 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:24:33.856 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:24:33.856 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:33.856 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=803834 00:24:33.856 13:24:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 803834 /var/tmp/spdk-raid.sock 00:24:33.856 13:24:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 803834 ']' 00:24:33.856 13:24:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:33.856 13:24:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:33.856 13:24:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:33.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:33.856 13:24:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:33.856 13:24:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:33.856 [2024-07-26 13:24:14.351255] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:24:33.856 [2024-07-26 13:24:14.351312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid803834 ] 00:24:33.856 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:33.856 Zero copy mechanism will not be used. 
00:24:34.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.115 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:34.115 [identical qat_pci_device_allocate()/EAL message pair repeated for each device from 0000:3d:01.1 through 0000:3f:02.3]
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.115 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:34.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.115 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:34.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.115 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:34.115 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:34.115 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:34.115 [2024-07-26 13:24:14.480476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.115 [2024-07-26 13:24:14.566468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.115 [2024-07-26 13:24:14.631832] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:34.115 [2024-07-26 13:24:14.631867] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:35.053 13:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:35.053 13:24:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:24:35.053 13:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:24:35.053 13:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:35.053 BaseBdev1_malloc 00:24:35.053 13:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:35.312 [2024-07-26 13:24:15.705656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:35.312 [2024-07-26 13:24:15.705698] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:24:35.312 [2024-07-26 13:24:15.705724] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9eb5f0 00:24:35.312 [2024-07-26 13:24:15.705736] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:35.312 [2024-07-26 13:24:15.707327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:35.312 [2024-07-26 13:24:15.707353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:35.312 BaseBdev1 00:24:35.312 13:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:24:35.312 13:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:35.571 BaseBdev2_malloc 00:24:35.571 13:24:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:35.831 [2024-07-26 13:24:16.155263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:35.831 [2024-07-26 13:24:16.155304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:35.831 [2024-07-26 13:24:16.155321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb8f130 00:24:35.831 [2024-07-26 13:24:16.155333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:35.831 [2024-07-26 13:24:16.156745] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:35.831 [2024-07-26 13:24:16.156772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:35.831 BaseBdev2 00:24:35.831 13:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in 
"${base_bdevs[@]}" 00:24:35.831 13:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:36.091 BaseBdev3_malloc 00:24:36.091 13:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:36.091 [2024-07-26 13:24:16.612686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:36.091 [2024-07-26 13:24:16.612726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:36.091 [2024-07-26 13:24:16.612743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb85420 00:24:36.091 [2024-07-26 13:24:16.612755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:36.091 [2024-07-26 13:24:16.614120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:36.091 [2024-07-26 13:24:16.614153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:36.091 BaseBdev3 00:24:36.349 13:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:24:36.349 13:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:36.350 BaseBdev4_malloc 00:24:36.350 13:24:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:36.609 [2024-07-26 13:24:17.074247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:36.609 [2024-07-26 
13:24:17.074287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:36.609 [2024-07-26 13:24:17.074305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb85d40 00:24:36.609 [2024-07-26 13:24:17.074316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:36.609 [2024-07-26 13:24:17.075671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:36.609 [2024-07-26 13:24:17.075697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:36.609 BaseBdev4 00:24:36.609 13:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:36.868 spare_malloc 00:24:36.868 13:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:37.176 spare_delay 00:24:37.176 13:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:37.447 [2024-07-26 13:24:17.752391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:37.447 [2024-07-26 13:24:17.752433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:37.447 [2024-07-26 13:24:17.752452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9e4db0 00:24:37.447 [2024-07-26 13:24:17.752464] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:37.447 [2024-07-26 13:24:17.753875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:37.447 [2024-07-26 13:24:17.753901] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:37.447 spare 00:24:37.447 13:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:24:37.447 [2024-07-26 13:24:17.968999] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:37.447 [2024-07-26 13:24:17.970159] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:37.447 [2024-07-26 13:24:17.970208] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:37.447 [2024-07-26 13:24:17.970250] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:37.447 [2024-07-26 13:24:17.970415] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x9e75b0 00:24:37.447 [2024-07-26 13:24:17.970425] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:37.447 [2024-07-26 13:24:17.970609] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9ea3d0 00:24:37.447 [2024-07-26 13:24:17.970743] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9e75b0 00:24:37.447 [2024-07-26 13:24:17.970753] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x9e75b0 00:24:37.447 [2024-07-26 13:24:17.970853] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:37.706 13:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:24:37.706 13:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:37.706 13:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:37.706 13:24:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:37.706 13:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:37.706 13:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:37.706 13:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:37.706 13:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:37.706 13:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:37.706 13:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:37.706 13:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:37.706 13:24:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:37.706 13:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:37.706 "name": "raid_bdev1", 00:24:37.706 "uuid": "c8124a30-b7c6-4d8b-8586-421a4b07681f", 00:24:37.706 "strip_size_kb": 0, 00:24:37.706 "state": "online", 00:24:37.706 "raid_level": "raid1", 00:24:37.706 "superblock": true, 00:24:37.706 "num_base_bdevs": 4, 00:24:37.706 "num_base_bdevs_discovered": 4, 00:24:37.706 "num_base_bdevs_operational": 4, 00:24:37.706 "base_bdevs_list": [ 00:24:37.706 { 00:24:37.706 "name": "BaseBdev1", 00:24:37.706 "uuid": "b852ba28-76ee-57ed-b4c4-3b29cbcdec5b", 00:24:37.706 "is_configured": true, 00:24:37.706 "data_offset": 2048, 00:24:37.706 "data_size": 63488 00:24:37.706 }, 00:24:37.706 { 00:24:37.706 "name": "BaseBdev2", 00:24:37.706 "uuid": "ecdbda9d-9cbc-5076-981a-3cd5b87f0287", 00:24:37.706 "is_configured": true, 00:24:37.706 "data_offset": 2048, 00:24:37.706 "data_size": 63488 00:24:37.706 }, 
00:24:37.706 { 00:24:37.706 "name": "BaseBdev3", 00:24:37.706 "uuid": "a112e239-50b6-56b5-8308-2133cbdf5ffe", 00:24:37.706 "is_configured": true, 00:24:37.706 "data_offset": 2048, 00:24:37.706 "data_size": 63488 00:24:37.706 }, 00:24:37.706 { 00:24:37.706 "name": "BaseBdev4", 00:24:37.706 "uuid": "d8418e6c-dbca-5408-9fc8-5bae561898f1", 00:24:37.706 "is_configured": true, 00:24:37.706 "data_offset": 2048, 00:24:37.706 "data_size": 63488 00:24:37.706 } 00:24:37.706 ] 00:24:37.706 }' 00:24:37.706 13:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:37.706 13:24:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:38.275 13:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:24:38.275 13:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:38.534 [2024-07-26 13:24:18.911721] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:38.534 13:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:24:38.534 13:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.534 13:24:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:38.793 13:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:24:38.793 13:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:24:38.793 13:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:24:38.793 13:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:24:38.793 13:24:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:38.793 13:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:38.793 13:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:38.793 13:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:38.793 13:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:38.793 13:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:38.793 13:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:24:38.793 13:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:38.793 13:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:38.793 13:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:38.793 [2024-07-26 13:24:19.300512] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9e7080 00:24:38.793 /dev/nbd0 00:24:39.052 13:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:39.052 13:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:39.052 13:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:24:39.052 13:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:24:39.052 13:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:24:39.052 13:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:24:39.052 13:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # 
grep -q -w nbd0 /proc/partitions 00:24:39.052 13:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:24:39.052 13:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:24:39.052 13:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:24:39.052 13:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:39.052 1+0 records in 00:24:39.052 1+0 records out 00:24:39.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226683 s, 18.1 MB/s 00:24:39.052 13:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:39.052 13:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:24:39.052 13:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:24:39.052 13:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:24:39.052 13:24:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:24:39.052 13:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:39.052 13:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:39.052 13:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:24:39.052 13:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:24:39.052 13:24:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:24:45.621 63488+0 records in 00:24:45.621 63488+0 records out 00:24:45.621 32505856 bytes (33 MB, 31 MiB) copied, 5.68437 s, 5.7 MB/s 
00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:45.621 [2024-07-26 13:24:25.293337] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:45.621 
[2024-07-26 13:24:25.509944] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:24:45.621 "name": "raid_bdev1",
00:24:45.621 "uuid": "c8124a30-b7c6-4d8b-8586-421a4b07681f",
00:24:45.621 "strip_size_kb": 0,
00:24:45.621 "state": "online",
00:24:45.621 "raid_level": "raid1",
00:24:45.621 "superblock": true,
00:24:45.621 "num_base_bdevs": 4,
00:24:45.621 "num_base_bdevs_discovered": 3,
00:24:45.621 "num_base_bdevs_operational": 3,
00:24:45.621 "base_bdevs_list": [
00:24:45.621 {
00:24:45.621 "name": null,
00:24:45.621 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:45.621 "is_configured": false,
00:24:45.621 "data_offset": 2048,
00:24:45.621 "data_size": 63488
00:24:45.621 },
00:24:45.621 {
00:24:45.621 "name": "BaseBdev2",
00:24:45.621 "uuid": "ecdbda9d-9cbc-5076-981a-3cd5b87f0287",
00:24:45.621 "is_configured": true,
00:24:45.621 "data_offset": 2048,
00:24:45.621 "data_size": 63488
00:24:45.621 },
00:24:45.621 {
00:24:45.621 "name": "BaseBdev3",
00:24:45.621 "uuid": "a112e239-50b6-56b5-8308-2133cbdf5ffe",
00:24:45.621 "is_configured": true,
00:24:45.621 "data_offset": 2048,
00:24:45.621 "data_size": 63488
00:24:45.621 },
00:24:45.621 {
00:24:45.621 "name": "BaseBdev4",
00:24:45.621 "uuid": "d8418e6c-dbca-5408-9fc8-5bae561898f1",
00:24:45.621 "is_configured": true,
00:24:45.621 "data_offset": 2048,
00:24:45.621 "data_size": 63488
00:24:45.621 }
00:24:45.621 ]
00:24:45.621 }'
00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:24:45.621 13:24:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:24:45.880 13:24:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:24:46.138 [2024-07-26 13:24:26.524624] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:24:46.138 [2024-07-26 13:24:26.528481] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9ea4f0
00:24:46.138 [2024-07-26 13:24:26.530544] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:24:46.138 13:24:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1
00:24:47.075 13:24:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:47.075 13:24:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:24:47.075 13:24:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:24:47.075 13:24:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare
00:24:47.075 13:24:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:24:47.075 13:24:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:47.075 13:24:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:47.334 13:24:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:24:47.334 "name": "raid_bdev1",
00:24:47.334 "uuid": "c8124a30-b7c6-4d8b-8586-421a4b07681f",
00:24:47.334 "strip_size_kb": 0,
00:24:47.334 "state": "online",
00:24:47.334 "raid_level": "raid1",
00:24:47.334 "superblock": true,
00:24:47.334 "num_base_bdevs": 4,
00:24:47.334 "num_base_bdevs_discovered": 4,
00:24:47.334 "num_base_bdevs_operational": 4,
00:24:47.334 "process": {
00:24:47.334 "type": "rebuild",
00:24:47.334 "target": "spare",
00:24:47.334 "progress": {
00:24:47.334 "blocks": 24576,
00:24:47.334 "percent": 38
00:24:47.334 }
00:24:47.334 },
00:24:47.334 "base_bdevs_list": [
00:24:47.334 {
00:24:47.334 "name": "spare",
00:24:47.334 "uuid": "a44eae1e-8b5f-5e8e-bbcd-39aef9494020",
00:24:47.334 "is_configured": true,
00:24:47.334 "data_offset": 2048,
00:24:47.334 "data_size": 63488
00:24:47.334 },
00:24:47.334 {
00:24:47.334 "name": "BaseBdev2",
00:24:47.334 "uuid": "ecdbda9d-9cbc-5076-981a-3cd5b87f0287",
00:24:47.334 "is_configured": true,
00:24:47.334 "data_offset": 2048,
00:24:47.334 "data_size": 63488
00:24:47.334 },
00:24:47.334 {
00:24:47.334 "name": "BaseBdev3",
00:24:47.334 "uuid": "a112e239-50b6-56b5-8308-2133cbdf5ffe",
00:24:47.334 "is_configured": true,
00:24:47.334 "data_offset": 2048,
00:24:47.334 "data_size": 63488
00:24:47.334 },
00:24:47.334 {
00:24:47.334 "name": "BaseBdev4",
00:24:47.334 "uuid": "d8418e6c-dbca-5408-9fc8-5bae561898f1",
00:24:47.334 "is_configured": true,
00:24:47.334 "data_offset": 2048,
00:24:47.334 "data_size": 63488
00:24:47.334 }
00:24:47.334 ]
00:24:47.334 }'
00:24:47.334 13:24:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:24:47.334 13:24:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:47.334 13:24:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:24:47.593 13:24:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:24:47.593 13:24:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:24:47.593 [2024-07-26 13:24:28.075575] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:24:47.852 [2024-07-26 13:24:28.142282] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:24:47.852 [2024-07-26 13:24:28.142322] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:47.852 [2024-07-26 13:24:28.142338] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:24:47.852 [2024-07-26 13:24:28.142345] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:24:47.852 13:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:24:47.852 13:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:24:47.852 13:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:24:47.852 13:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:24:47.852 13:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:24:47.852 13:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:24:47.852 13:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:24:47.852 13:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:24:47.852 13:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:24:47.852 13:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:24:47.853 13:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:47.853 13:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:48.111 13:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:24:48.111 "name": "raid_bdev1",
00:24:48.111 "uuid": "c8124a30-b7c6-4d8b-8586-421a4b07681f",
00:24:48.111 "strip_size_kb": 0,
00:24:48.111 "state": "online",
00:24:48.111 "raid_level": "raid1",
00:24:48.111 "superblock": true,
00:24:48.111 "num_base_bdevs": 4,
00:24:48.111 "num_base_bdevs_discovered": 3,
00:24:48.111 "num_base_bdevs_operational": 3,
00:24:48.111 "base_bdevs_list": [
00:24:48.111 {
00:24:48.111 "name": null,
00:24:48.111 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:48.111 "is_configured": false,
00:24:48.111 "data_offset": 2048,
00:24:48.111 "data_size": 63488
00:24:48.111 },
00:24:48.111 {
00:24:48.111 "name": "BaseBdev2",
00:24:48.111 "uuid": "ecdbda9d-9cbc-5076-981a-3cd5b87f0287",
00:24:48.111 "is_configured": true,
00:24:48.111 "data_offset": 2048,
00:24:48.111 "data_size": 63488
00:24:48.111 },
00:24:48.111 {
00:24:48.111 "name": "BaseBdev3",
00:24:48.111 "uuid": "a112e239-50b6-56b5-8308-2133cbdf5ffe",
00:24:48.111 "is_configured": true,
00:24:48.111 "data_offset": 2048,
00:24:48.111 "data_size": 63488
00:24:48.111 },
00:24:48.111 {
00:24:48.111 "name": "BaseBdev4",
00:24:48.111 "uuid": "d8418e6c-dbca-5408-9fc8-5bae561898f1",
00:24:48.111 "is_configured": true,
00:24:48.111 "data_offset": 2048,
00:24:48.111 "data_size": 63488
00:24:48.111 }
00:24:48.111 ]
00:24:48.111 }'
00:24:48.111 13:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:24:48.111 13:24:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:24:48.679 13:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none
00:24:48.679 13:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:24:48.679 13:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none
00:24:48.679 13:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none
00:24:48.679 13:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:24:48.679 13:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:48.679 13:24:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:49.247 13:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:24:49.247 "name": "raid_bdev1",
00:24:49.247 "uuid": "c8124a30-b7c6-4d8b-8586-421a4b07681f",
00:24:49.247 "strip_size_kb": 0,
00:24:49.247 "state": "online",
00:24:49.247 "raid_level": "raid1",
00:24:49.247 "superblock": true,
00:24:49.247 "num_base_bdevs": 4,
00:24:49.247 "num_base_bdevs_discovered": 3,
00:24:49.247 "num_base_bdevs_operational": 3,
00:24:49.247 "base_bdevs_list": [
00:24:49.247 {
00:24:49.247 "name": null,
00:24:49.247 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:49.247 "is_configured": false,
00:24:49.247 "data_offset": 2048,
00:24:49.247 "data_size": 63488
00:24:49.247 },
00:24:49.247 {
00:24:49.247 "name": "BaseBdev2",
00:24:49.247 "uuid": "ecdbda9d-9cbc-5076-981a-3cd5b87f0287",
00:24:49.247 "is_configured": true,
00:24:49.247 "data_offset": 2048,
00:24:49.247 "data_size": 63488
00:24:49.247 },
00:24:49.247 {
00:24:49.247 "name": "BaseBdev3",
00:24:49.247 "uuid": "a112e239-50b6-56b5-8308-2133cbdf5ffe",
00:24:49.247 "is_configured": true,
00:24:49.247 "data_offset": 2048,
00:24:49.247 "data_size": 63488
00:24:49.247 },
00:24:49.247 {
00:24:49.247 "name": "BaseBdev4",
00:24:49.247 "uuid": "d8418e6c-dbca-5408-9fc8-5bae561898f1",
00:24:49.247 "is_configured": true,
00:24:49.247 "data_offset": 2048,
00:24:49.247 "data_size": 63488
00:24:49.247 }
00:24:49.247 ]
00:24:49.247 }'
00:24:49.247 13:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:24:49.247 13:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]]
00:24:49.247 13:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:24:49.247 13:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:24:49.247 13:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:24:49.506 [2024-07-26 13:24:29.781906] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:24:49.506 [2024-07-26 13:24:29.785828] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9ea4f0
00:24:49.506 [2024-07-26 13:24:29.787239] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:24:49.506 13:24:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1
00:24:50.443 13:24:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:50.443 13:24:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:24:50.443 13:24:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:24:50.443 13:24:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare
00:24:50.443 13:24:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:24:50.443 13:24:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:50.443 13:24:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:50.702 13:24:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:24:50.702 "name": "raid_bdev1",
00:24:50.702 "uuid": "c8124a30-b7c6-4d8b-8586-421a4b07681f",
00:24:50.702 "strip_size_kb": 0,
00:24:50.702 "state": "online",
00:24:50.702 "raid_level": "raid1",
00:24:50.702 "superblock": true,
00:24:50.702 "num_base_bdevs": 4,
00:24:50.702 "num_base_bdevs_discovered": 4,
00:24:50.702 "num_base_bdevs_operational": 4,
00:24:50.702 "process": {
00:24:50.702 "type": "rebuild",
00:24:50.702 "target": "spare",
00:24:50.702 "progress": {
00:24:50.702 "blocks": 22528,
00:24:50.702 "percent": 35
00:24:50.702 }
00:24:50.702 },
00:24:50.702 "base_bdevs_list": [
00:24:50.702 {
00:24:50.702 "name": "spare",
00:24:50.702 "uuid": "a44eae1e-8b5f-5e8e-bbcd-39aef9494020",
00:24:50.702 "is_configured": true,
00:24:50.702 "data_offset": 2048,
00:24:50.702 "data_size": 63488
00:24:50.702 },
00:24:50.702 {
00:24:50.702 "name": "BaseBdev2",
00:24:50.702 "uuid": "ecdbda9d-9cbc-5076-981a-3cd5b87f0287",
00:24:50.702 "is_configured": true,
00:24:50.702 "data_offset": 2048,
00:24:50.702 "data_size": 63488
00:24:50.702 },
00:24:50.702 {
00:24:50.702 "name": "BaseBdev3",
00:24:50.702 "uuid": "a112e239-50b6-56b5-8308-2133cbdf5ffe",
00:24:50.702 "is_configured": true,
00:24:50.702 "data_offset": 2048,
00:24:50.702 "data_size": 63488
00:24:50.702 },
00:24:50.702 {
00:24:50.702 "name": "BaseBdev4",
00:24:50.702 "uuid": "d8418e6c-dbca-5408-9fc8-5bae561898f1",
00:24:50.702 "is_configured": true,
00:24:50.702 "data_offset": 2048,
00:24:50.702 "data_size": 63488
00:24:50.702 }
00:24:50.702 ]
00:24:50.702 }'
00:24:50.702 13:24:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:24:50.702 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:50.702 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:24:50.702 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:24:50.702 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']'
00:24:50.702 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']'
00:24:50.702 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected
00:24:50.702 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4
00:24:50.702 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']'
00:24:50.702 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']'
00:24:50.702 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2
00:24:50.961 [2024-07-26 13:24:31.243998] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:24:50.961 [2024-07-26 13:24:31.398528] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x9ea4f0
00:24:50.961 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]=
00:24:50.961 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- ))
00:24:50.961 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:50.961 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:24:50.961 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:24:50.961 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare
00:24:50.961 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:24:50.961 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:50.961 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:51.220 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:24:51.220 "name": "raid_bdev1",
00:24:51.220 "uuid": "c8124a30-b7c6-4d8b-8586-421a4b07681f",
00:24:51.220 "strip_size_kb": 0,
00:24:51.220 "state": "online",
00:24:51.220 "raid_level": "raid1",
00:24:51.220 "superblock": true,
00:24:51.220 "num_base_bdevs": 4,
00:24:51.221 "num_base_bdevs_discovered": 3,
00:24:51.221 "num_base_bdevs_operational": 3,
00:24:51.221 "process": {
00:24:51.221 "type": "rebuild",
00:24:51.221 "target": "spare",
00:24:51.221 "progress": {
00:24:51.221 "blocks": 34816,
00:24:51.221 "percent": 54
00:24:51.221 }
00:24:51.221 },
00:24:51.221 "base_bdevs_list": [
00:24:51.221 {
00:24:51.221 "name": "spare",
00:24:51.221 "uuid": "a44eae1e-8b5f-5e8e-bbcd-39aef9494020",
00:24:51.221 "is_configured": true,
00:24:51.221 "data_offset": 2048,
00:24:51.221 "data_size": 63488
00:24:51.221 },
00:24:51.221 {
00:24:51.221 "name": null,
00:24:51.221 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:51.221 "is_configured": false,
00:24:51.221 "data_offset": 2048,
00:24:51.221 "data_size": 63488
00:24:51.221 },
00:24:51.221 {
00:24:51.221 "name": "BaseBdev3",
00:24:51.221 "uuid": "a112e239-50b6-56b5-8308-2133cbdf5ffe",
00:24:51.221 "is_configured": true,
00:24:51.221 "data_offset": 2048,
00:24:51.221 "data_size": 63488
00:24:51.221 },
00:24:51.221 {
00:24:51.221 "name": "BaseBdev4",
00:24:51.221 "uuid": "d8418e6c-dbca-5408-9fc8-5bae561898f1",
00:24:51.221 "is_configured": true,
00:24:51.221 "data_offset": 2048,
00:24:51.221 "data_size": 63488
00:24:51.221 }
00:24:51.221 ]
00:24:51.221 }'
00:24:51.221 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:24:51.221 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:51.221 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:24:51.221 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:24:51.221 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=857
00:24:51.221 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout ))
00:24:51.221 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:51.221 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:24:51.221 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:24:51.221 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare
00:24:51.221 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:24:51.221 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:51.221 13:24:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:51.789 13:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:24:51.789 "name": "raid_bdev1",
00:24:51.789 "uuid": "c8124a30-b7c6-4d8b-8586-421a4b07681f",
00:24:51.789 "strip_size_kb": 0,
00:24:51.789 "state": "online",
00:24:51.789 "raid_level": "raid1",
00:24:51.789 "superblock": true,
00:24:51.789 "num_base_bdevs": 4,
00:24:51.789 "num_base_bdevs_discovered": 3,
00:24:51.789 "num_base_bdevs_operational": 3,
00:24:51.789 "process": {
00:24:51.789 "type": "rebuild",
00:24:51.789 "target": "spare",
00:24:51.789 "progress": {
00:24:51.789 "blocks": 45056,
00:24:51.789 "percent": 70
00:24:51.789 }
00:24:51.789 },
00:24:51.789 "base_bdevs_list": [
00:24:51.789 {
00:24:51.789 "name": "spare",
00:24:51.789 "uuid": "a44eae1e-8b5f-5e8e-bbcd-39aef9494020",
00:24:51.789 "is_configured": true,
00:24:51.789 "data_offset": 2048,
00:24:51.789 "data_size": 63488
00:24:51.789 },
00:24:51.789 {
00:24:51.789 "name": null,
00:24:51.789 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:51.789 "is_configured": false,
00:24:51.789 "data_offset": 2048,
00:24:51.789 "data_size": 63488
00:24:51.789 },
00:24:51.789 {
00:24:51.789 "name": "BaseBdev3",
00:24:51.789 "uuid": "a112e239-50b6-56b5-8308-2133cbdf5ffe",
00:24:51.789 "is_configured": true,
00:24:51.789 "data_offset": 2048,
00:24:51.789 "data_size": 63488
00:24:51.789 },
00:24:51.789 {
00:24:51.789 "name": "BaseBdev4",
00:24:51.789 "uuid": "d8418e6c-dbca-5408-9fc8-5bae561898f1",
00:24:51.789 "is_configured": true,
00:24:51.789 "data_offset": 2048,
00:24:51.789 "data_size": 63488
00:24:51.789 }
00:24:51.789 ]
00:24:51.789 }'
00:24:51.789 13:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:24:51.789 13:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:51.789 13:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:24:51.789 13:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:24:51.789 13:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1
00:24:52.724 [2024-07-26 13:24:33.010216] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:24:52.725 [2024-07-26 13:24:33.010273] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:24:52.725 [2024-07-26 13:24:33.010362] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:52.983 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout ))
00:24:52.983 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:52.983 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:24:52.983 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:24:52.983 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare
00:24:52.983 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:24:52.983 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:52.983 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:53.242 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:24:53.242 "name": "raid_bdev1",
00:24:53.242 "uuid": "c8124a30-b7c6-4d8b-8586-421a4b07681f",
00:24:53.242 "strip_size_kb": 0,
00:24:53.242 "state": "online",
00:24:53.242 "raid_level": "raid1",
00:24:53.242 "superblock": true,
00:24:53.242 "num_base_bdevs": 4,
00:24:53.242 "num_base_bdevs_discovered": 3,
00:24:53.242 "num_base_bdevs_operational": 3,
00:24:53.242 "base_bdevs_list": [
00:24:53.242 {
00:24:53.242 "name": "spare",
00:24:53.242 "uuid": "a44eae1e-8b5f-5e8e-bbcd-39aef9494020",
00:24:53.242 "is_configured": true,
00:24:53.242 "data_offset": 2048,
00:24:53.242 "data_size": 63488
00:24:53.242 },
00:24:53.242 {
00:24:53.242 "name": null,
00:24:53.242 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:53.242 "is_configured": false,
00:24:53.242 "data_offset": 2048,
00:24:53.242 "data_size": 63488
00:24:53.242 },
00:24:53.242 {
00:24:53.242 "name": "BaseBdev3",
00:24:53.242 "uuid": "a112e239-50b6-56b5-8308-2133cbdf5ffe",
00:24:53.242 "is_configured": true,
00:24:53.242 "data_offset": 2048,
00:24:53.242 "data_size": 63488
00:24:53.242 },
00:24:53.242 {
00:24:53.242 "name": "BaseBdev4",
00:24:53.242 "uuid": "d8418e6c-dbca-5408-9fc8-5bae561898f1",
00:24:53.242 "is_configured": true,
00:24:53.242 "data_offset": 2048,
00:24:53.242 "data_size": 63488
00:24:53.242 }
00:24:53.242 ]
00:24:53.242 }'
00:24:53.242 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:24:53.242 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]]
00:24:53.242 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:24:53.242 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]]
00:24:53.242 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break
00:24:53.242 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none
00:24:53.242 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:24:53.242 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none
00:24:53.242 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none
00:24:53.242 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:24:53.242 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:53.242 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:53.501 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:24:53.501 "name": "raid_bdev1",
00:24:53.501 "uuid": "c8124a30-b7c6-4d8b-8586-421a4b07681f",
00:24:53.501 "strip_size_kb": 0,
00:24:53.501 "state": "online",
00:24:53.501 "raid_level": "raid1",
00:24:53.501 "superblock": true,
00:24:53.501 "num_base_bdevs": 4,
00:24:53.501 "num_base_bdevs_discovered": 3,
00:24:53.501 "num_base_bdevs_operational": 3,
00:24:53.501 "base_bdevs_list": [
00:24:53.501 {
00:24:53.501 "name": "spare",
00:24:53.501 "uuid": "a44eae1e-8b5f-5e8e-bbcd-39aef9494020",
00:24:53.501 "is_configured": true,
00:24:53.501 "data_offset": 2048,
00:24:53.501 "data_size": 63488
00:24:53.501 },
00:24:53.501 {
00:24:53.501 "name": null,
00:24:53.501 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:53.501 "is_configured": false,
00:24:53.501 "data_offset": 2048,
00:24:53.501 "data_size": 63488
00:24:53.501 },
00:24:53.501 {
00:24:53.501 "name": "BaseBdev3",
00:24:53.501 "uuid": "a112e239-50b6-56b5-8308-2133cbdf5ffe",
00:24:53.501 "is_configured": true,
00:24:53.501 "data_offset": 2048,
00:24:53.501 "data_size": 63488
00:24:53.501 },
00:24:53.501 {
00:24:53.501 "name": "BaseBdev4",
00:24:53.501 "uuid": "d8418e6c-dbca-5408-9fc8-5bae561898f1",
00:24:53.501 "is_configured": true,
00:24:53.501 "data_offset": 2048,
00:24:53.501 "data_size": 63488
00:24:53.501 }
00:24:53.501 ]
00:24:53.501 }'
00:24:53.501 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:24:53.501 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]]
00:24:53.501 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:24:53.501 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:24:53.502 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:24:53.502 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:24:53.502 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:24:53.502 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:24:53.502 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:24:53.502 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:24:53.502 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:24:53.502 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:24:53.502 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:24:53.502 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp
00:24:53.502 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:53.502 13:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:54.070 13:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:24:54.070 "name": "raid_bdev1",
00:24:54.070 "uuid": "c8124a30-b7c6-4d8b-8586-421a4b07681f",
00:24:54.070 "strip_size_kb": 0,
00:24:54.070 "state": "online",
00:24:54.070 "raid_level": "raid1",
00:24:54.070 "superblock": true,
00:24:54.070 "num_base_bdevs": 4,
00:24:54.070 "num_base_bdevs_discovered": 3,
00:24:54.070 "num_base_bdevs_operational": 3,
00:24:54.070 "base_bdevs_list": [
00:24:54.070 {
00:24:54.070 "name": "spare",
00:24:54.070 "uuid": "a44eae1e-8b5f-5e8e-bbcd-39aef9494020",
00:24:54.070 "is_configured": true,
00:24:54.070 "data_offset": 2048,
00:24:54.070 "data_size": 63488
00:24:54.070 },
00:24:54.070 {
00:24:54.070 "name": null,
00:24:54.070 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:54.070 "is_configured": false,
00:24:54.070 "data_offset": 2048,
00:24:54.070 "data_size": 63488
00:24:54.070 },
00:24:54.070 {
00:24:54.070 "name": "BaseBdev3",
00:24:54.070 "uuid": "a112e239-50b6-56b5-8308-2133cbdf5ffe",
00:24:54.070 "is_configured": true,
00:24:54.070 "data_offset": 2048,
00:24:54.070 "data_size": 63488
00:24:54.070 },
00:24:54.070 {
00:24:54.070 "name": "BaseBdev4",
00:24:54.070 "uuid": "d8418e6c-dbca-5408-9fc8-5bae561898f1",
00:24:54.070 "is_configured": true,
00:24:54.070 "data_offset": 2048,
00:24:54.070 "data_size": 63488
00:24:54.070 }
00:24:54.070 ]
00:24:54.070 }'
00:24:54.070 13:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:24:54.070 13:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:24:54.674 13:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:24:54.674 [2024-07-26 13:24:35.116214] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:24:54.674 [2024-07-26 13:24:35.116239] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:24:54.674 [2024-07-26 13:24:35.116291] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:24:54.674 [2024-07-26 13:24:35.116354] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:24:54.674 [2024-07-26 13:24:35.116365] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9e75b0 name raid_bdev1, state offline
00:24:54.674 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:54.674 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]]
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']'
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']'
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:24:54.956 /dev/nbd0
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:24:54.956 1+0 records in
00:24:54.956 1+0 records out
00:24:54.956 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229004 s, 17.9 MB/s
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0
00:24:54.956 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:24:55.216 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:24:55.216 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1
00:24:55.216 /dev/nbd1
00:24:55.216 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:24:55.216 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:24:55.216 13:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:24:55.216 13:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i
00:24:55.216 13:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:24:55.216 13:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:24:55.216 13:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:24:55.216 13:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break
00:24:55.216 13:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:24:55.216 13:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:24:55.216 13:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:24:55.216 1+0 records in
00:24:55.216 1+0 records out
00:24:55.216 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298309 s, 13.7 MB/s
00:24:55.216 13:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:24:55.216 13:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096
00:24:55.216 13:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:24:55.216 13:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:24:55.216 13:24:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0
00:24:55.216 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:24:55.216 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:24:55.216 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:24:55.476 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1'
00:24:55.476 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:24:55.476 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:24:55.476 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:24:55.476 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:24:55.476 13:24:35
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:55.476 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:55.476 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:55.476 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:55.476 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:55.476 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:55.476 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:55.476 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:55.476 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:55.476 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:55.476 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:55.476 13:24:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:55.735 13:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:55.735 13:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:55.735 13:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:55.735 13:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:55.735 13:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:55.735 13:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:24:55.735 13:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:24:55.735 13:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:24:55.735 13:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:24:55.736 13:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:55.994 13:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:56.253 [2024-07-26 13:24:36.680762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:56.253 [2024-07-26 13:24:36.680805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:56.253 [2024-07-26 13:24:36.680824] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9e8010 00:24:56.253 [2024-07-26 13:24:36.680835] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:56.253 [2024-07-26 13:24:36.682376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:56.253 [2024-07-26 13:24:36.682405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:56.253 [2024-07-26 13:24:36.682479] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:56.253 [2024-07-26 13:24:36.682506] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:56.253 [2024-07-26 13:24:36.682599] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:56.253 [2024-07-26 13:24:36.682666] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:56.253 spare 00:24:56.253 13:24:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:24:56.253 13:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:56.253 13:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:56.253 13:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:56.253 13:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:56.253 13:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:56.253 13:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:56.253 13:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:56.253 13:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:56.253 13:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:56.253 13:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.253 13:24:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:56.512 [2024-07-26 13:24:36.782977] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x9e68f0 00:24:56.512 [2024-07-26 13:24:36.782993] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:24:56.512 [2024-07-26 13:24:36.783183] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9e65b0 00:24:56.512 [2024-07-26 13:24:36.783330] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9e68f0 00:24:56.512 [2024-07-26 13:24:36.783339] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x9e68f0 00:24:56.512 [2024-07-26 13:24:36.783437] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:56.771 13:24:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:56.771 "name": "raid_bdev1", 00:24:56.771 "uuid": "c8124a30-b7c6-4d8b-8586-421a4b07681f", 00:24:56.771 "strip_size_kb": 0, 00:24:56.771 "state": "online", 00:24:56.771 "raid_level": "raid1", 00:24:56.771 "superblock": true, 00:24:56.771 "num_base_bdevs": 4, 00:24:56.771 "num_base_bdevs_discovered": 3, 00:24:56.771 "num_base_bdevs_operational": 3, 00:24:56.771 "base_bdevs_list": [ 00:24:56.771 { 00:24:56.771 "name": "spare", 00:24:56.771 "uuid": "a44eae1e-8b5f-5e8e-bbcd-39aef9494020", 00:24:56.771 "is_configured": true, 00:24:56.771 "data_offset": 2048, 00:24:56.771 "data_size": 63488 00:24:56.771 }, 00:24:56.771 { 00:24:56.771 "name": null, 00:24:56.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:56.771 "is_configured": false, 00:24:56.771 "data_offset": 2048, 00:24:56.771 "data_size": 63488 00:24:56.771 }, 00:24:56.771 { 00:24:56.771 "name": "BaseBdev3", 00:24:56.771 "uuid": "a112e239-50b6-56b5-8308-2133cbdf5ffe", 00:24:56.771 "is_configured": true, 00:24:56.771 "data_offset": 2048, 00:24:56.771 "data_size": 63488 00:24:56.771 }, 00:24:56.771 { 00:24:56.771 "name": "BaseBdev4", 00:24:56.771 "uuid": "d8418e6c-dbca-5408-9fc8-5bae561898f1", 00:24:56.771 "is_configured": true, 00:24:56.771 "data_offset": 2048, 00:24:56.772 "data_size": 63488 00:24:56.772 } 00:24:56.772 ] 00:24:56.772 }' 00:24:56.772 13:24:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:56.772 13:24:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:57.339 13:24:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:57.339 13:24:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:24:57.339 13:24:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:24:57.339 13:24:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:24:57.339 13:24:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:24:57.339 13:24:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.339 13:24:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.599 13:24:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:57.599 "name": "raid_bdev1", 00:24:57.599 "uuid": "c8124a30-b7c6-4d8b-8586-421a4b07681f", 00:24:57.599 "strip_size_kb": 0, 00:24:57.599 "state": "online", 00:24:57.599 "raid_level": "raid1", 00:24:57.599 "superblock": true, 00:24:57.599 "num_base_bdevs": 4, 00:24:57.599 "num_base_bdevs_discovered": 3, 00:24:57.599 "num_base_bdevs_operational": 3, 00:24:57.599 "base_bdevs_list": [ 00:24:57.599 { 00:24:57.599 "name": "spare", 00:24:57.599 "uuid": "a44eae1e-8b5f-5e8e-bbcd-39aef9494020", 00:24:57.599 "is_configured": true, 00:24:57.599 "data_offset": 2048, 00:24:57.599 "data_size": 63488 00:24:57.599 }, 00:24:57.599 { 00:24:57.599 "name": null, 00:24:57.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.599 "is_configured": false, 00:24:57.599 "data_offset": 2048, 00:24:57.599 "data_size": 63488 00:24:57.599 }, 00:24:57.599 { 00:24:57.599 "name": "BaseBdev3", 00:24:57.599 "uuid": "a112e239-50b6-56b5-8308-2133cbdf5ffe", 00:24:57.599 "is_configured": true, 00:24:57.599 "data_offset": 2048, 00:24:57.599 "data_size": 63488 00:24:57.599 }, 00:24:57.599 { 00:24:57.599 "name": "BaseBdev4", 00:24:57.599 "uuid": "d8418e6c-dbca-5408-9fc8-5bae561898f1", 00:24:57.599 "is_configured": true, 00:24:57.599 "data_offset": 
2048, 00:24:57.599 "data_size": 63488 00:24:57.599 } 00:24:57.599 ] 00:24:57.599 }' 00:24:57.599 13:24:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:24:57.599 13:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:24:57.599 13:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:24:57.599 13:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:57.599 13:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.599 13:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:57.858 13:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:24:57.858 13:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:58.117 [2024-07-26 13:24:38.501747] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:58.117 13:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:58.117 13:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:58.117 13:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:58.117 13:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:58.117 13:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:58.117 13:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:24:58.117 13:24:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:58.117 13:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:58.117 13:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:58.117 13:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:58.117 13:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.117 13:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.376 13:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:58.376 "name": "raid_bdev1", 00:24:58.376 "uuid": "c8124a30-b7c6-4d8b-8586-421a4b07681f", 00:24:58.376 "strip_size_kb": 0, 00:24:58.376 "state": "online", 00:24:58.376 "raid_level": "raid1", 00:24:58.376 "superblock": true, 00:24:58.376 "num_base_bdevs": 4, 00:24:58.376 "num_base_bdevs_discovered": 2, 00:24:58.376 "num_base_bdevs_operational": 2, 00:24:58.376 "base_bdevs_list": [ 00:24:58.376 { 00:24:58.376 "name": null, 00:24:58.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.376 "is_configured": false, 00:24:58.376 "data_offset": 2048, 00:24:58.376 "data_size": 63488 00:24:58.376 }, 00:24:58.376 { 00:24:58.376 "name": null, 00:24:58.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.376 "is_configured": false, 00:24:58.376 "data_offset": 2048, 00:24:58.376 "data_size": 63488 00:24:58.376 }, 00:24:58.377 { 00:24:58.377 "name": "BaseBdev3", 00:24:58.377 "uuid": "a112e239-50b6-56b5-8308-2133cbdf5ffe", 00:24:58.377 "is_configured": true, 00:24:58.377 "data_offset": 2048, 00:24:58.377 "data_size": 63488 00:24:58.377 }, 00:24:58.377 { 00:24:58.377 "name": "BaseBdev4", 00:24:58.377 "uuid": "d8418e6c-dbca-5408-9fc8-5bae561898f1", 
00:24:58.377 "is_configured": true, 00:24:58.377 "data_offset": 2048, 00:24:58.377 "data_size": 63488 00:24:58.377 } 00:24:58.377 ] 00:24:58.377 }' 00:24:58.377 13:24:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:58.377 13:24:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:58.945 13:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:59.205 [2024-07-26 13:24:39.532487] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:59.205 [2024-07-26 13:24:39.532624] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:24:59.205 [2024-07-26 13:24:39.532639] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:59.205 [2024-07-26 13:24:39.532665] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:59.205 [2024-07-26 13:24:39.536411] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9e7fc0 00:24:59.205 [2024-07-26 13:24:39.538473] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:59.205 13:24:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:25:00.143 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:00.143 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:00.143 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:00.143 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:00.143 13:24:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:00.143 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.143 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.402 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:00.402 "name": "raid_bdev1", 00:25:00.402 "uuid": "c8124a30-b7c6-4d8b-8586-421a4b07681f", 00:25:00.402 "strip_size_kb": 0, 00:25:00.402 "state": "online", 00:25:00.402 "raid_level": "raid1", 00:25:00.402 "superblock": true, 00:25:00.402 "num_base_bdevs": 4, 00:25:00.402 "num_base_bdevs_discovered": 3, 00:25:00.402 "num_base_bdevs_operational": 3, 00:25:00.402 "process": { 00:25:00.402 "type": "rebuild", 00:25:00.402 "target": "spare", 00:25:00.402 "progress": { 00:25:00.402 "blocks": 24576, 00:25:00.402 "percent": 38 00:25:00.402 } 00:25:00.402 }, 00:25:00.402 "base_bdevs_list": [ 00:25:00.402 { 00:25:00.402 "name": "spare", 00:25:00.402 "uuid": "a44eae1e-8b5f-5e8e-bbcd-39aef9494020", 00:25:00.402 "is_configured": true, 00:25:00.402 "data_offset": 2048, 00:25:00.402 "data_size": 63488 00:25:00.402 }, 00:25:00.402 { 00:25:00.402 "name": null, 00:25:00.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:00.402 "is_configured": false, 00:25:00.402 "data_offset": 2048, 00:25:00.402 "data_size": 63488 00:25:00.402 }, 00:25:00.402 { 00:25:00.402 "name": "BaseBdev3", 00:25:00.402 "uuid": "a112e239-50b6-56b5-8308-2133cbdf5ffe", 00:25:00.402 "is_configured": true, 00:25:00.402 "data_offset": 2048, 00:25:00.402 "data_size": 63488 00:25:00.402 }, 00:25:00.402 { 00:25:00.402 "name": "BaseBdev4", 00:25:00.402 "uuid": "d8418e6c-dbca-5408-9fc8-5bae561898f1", 00:25:00.402 "is_configured": true, 00:25:00.402 "data_offset": 2048, 00:25:00.402 "data_size": 63488 00:25:00.402 } 00:25:00.402 ] 00:25:00.402 }' 
00:25:00.402 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:00.402 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:00.402 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:00.662 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:00.662 13:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:00.662 [2024-07-26 13:24:41.079430] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:00.662 [2024-07-26 13:24:41.150112] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:00.662 [2024-07-26 13:24:41.150159] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:00.662 [2024-07-26 13:24:41.150175] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:00.662 [2024-07-26 13:24:41.150183] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:00.662 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:00.662 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:00.662 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:00.662 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:00.662 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:00.662 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:00.662 13:24:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:00.662 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:00.662 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:00.662 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:00.662 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.662 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.231 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:01.231 "name": "raid_bdev1", 00:25:01.231 "uuid": "c8124a30-b7c6-4d8b-8586-421a4b07681f", 00:25:01.231 "strip_size_kb": 0, 00:25:01.231 "state": "online", 00:25:01.231 "raid_level": "raid1", 00:25:01.231 "superblock": true, 00:25:01.231 "num_base_bdevs": 4, 00:25:01.231 "num_base_bdevs_discovered": 2, 00:25:01.231 "num_base_bdevs_operational": 2, 00:25:01.231 "base_bdevs_list": [ 00:25:01.231 { 00:25:01.231 "name": null, 00:25:01.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.231 "is_configured": false, 00:25:01.231 "data_offset": 2048, 00:25:01.231 "data_size": 63488 00:25:01.231 }, 00:25:01.231 { 00:25:01.231 "name": null, 00:25:01.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.231 "is_configured": false, 00:25:01.231 "data_offset": 2048, 00:25:01.231 "data_size": 63488 00:25:01.231 }, 00:25:01.231 { 00:25:01.231 "name": "BaseBdev3", 00:25:01.231 "uuid": "a112e239-50b6-56b5-8308-2133cbdf5ffe", 00:25:01.231 "is_configured": true, 00:25:01.231 "data_offset": 2048, 00:25:01.231 "data_size": 63488 00:25:01.231 }, 00:25:01.231 { 00:25:01.231 "name": "BaseBdev4", 00:25:01.231 "uuid": "d8418e6c-dbca-5408-9fc8-5bae561898f1", 
00:25:01.231 "is_configured": true, 00:25:01.231 "data_offset": 2048, 00:25:01.231 "data_size": 63488 00:25:01.231 } 00:25:01.231 ] 00:25:01.231 }' 00:25:01.231 13:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:01.231 13:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:01.800 13:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:02.060 [2024-07-26 13:24:42.421341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:02.060 [2024-07-26 13:24:42.421389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:02.060 [2024-07-26 13:24:42.421409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9e7d80 00:25:02.060 [2024-07-26 13:24:42.421420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:02.060 [2024-07-26 13:24:42.421765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:02.060 [2024-07-26 13:24:42.421781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:02.060 [2024-07-26 13:24:42.421853] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:02.060 [2024-07-26 13:24:42.421864] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:25:02.060 [2024-07-26 13:24:42.421874] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:02.060 [2024-07-26 13:24:42.421892] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:02.060 [2024-07-26 13:24:42.425677] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa82950 00:25:02.060 spare 00:25:02.060 [2024-07-26 13:24:42.427054] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:02.060 13:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:25:02.997 13:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:02.997 13:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:02.997 13:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:02.997 13:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:02.997 13:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:02.997 13:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:02.997 13:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.256 13:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:03.256 "name": "raid_bdev1", 00:25:03.256 "uuid": "c8124a30-b7c6-4d8b-8586-421a4b07681f", 00:25:03.256 "strip_size_kb": 0, 00:25:03.256 "state": "online", 00:25:03.256 "raid_level": "raid1", 00:25:03.256 "superblock": true, 00:25:03.256 "num_base_bdevs": 4, 00:25:03.256 "num_base_bdevs_discovered": 3, 00:25:03.256 "num_base_bdevs_operational": 3, 00:25:03.256 "process": { 00:25:03.256 "type": "rebuild", 00:25:03.256 "target": "spare", 00:25:03.256 "progress": { 00:25:03.256 "blocks": 22528, 00:25:03.256 
"percent": 35 00:25:03.256 } 00:25:03.256 }, 00:25:03.256 "base_bdevs_list": [ 00:25:03.256 { 00:25:03.256 "name": "spare", 00:25:03.256 "uuid": "a44eae1e-8b5f-5e8e-bbcd-39aef9494020", 00:25:03.256 "is_configured": true, 00:25:03.256 "data_offset": 2048, 00:25:03.256 "data_size": 63488 00:25:03.256 }, 00:25:03.256 { 00:25:03.256 "name": null, 00:25:03.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.256 "is_configured": false, 00:25:03.256 "data_offset": 2048, 00:25:03.256 "data_size": 63488 00:25:03.256 }, 00:25:03.256 { 00:25:03.256 "name": "BaseBdev3", 00:25:03.256 "uuid": "a112e239-50b6-56b5-8308-2133cbdf5ffe", 00:25:03.256 "is_configured": true, 00:25:03.256 "data_offset": 2048, 00:25:03.256 "data_size": 63488 00:25:03.256 }, 00:25:03.256 { 00:25:03.256 "name": "BaseBdev4", 00:25:03.256 "uuid": "d8418e6c-dbca-5408-9fc8-5bae561898f1", 00:25:03.256 "is_configured": true, 00:25:03.256 "data_offset": 2048, 00:25:03.256 "data_size": 63488 00:25:03.256 } 00:25:03.256 ] 00:25:03.256 }' 00:25:03.256 13:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:03.256 13:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:03.256 13:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:03.256 13:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:03.256 13:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:03.516 [2024-07-26 13:24:43.918580] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:03.516 [2024-07-26 13:24:43.938051] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:03.516 [2024-07-26 13:24:43.938091] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:03.516 [2024-07-26 13:24:43.938106] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:03.516 [2024-07-26 13:24:43.938114] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:03.516 13:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:03.516 13:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:03.516 13:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:03.516 13:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:03.516 13:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:03.516 13:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:03.516 13:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:03.516 13:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:03.516 13:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:03.516 13:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:03.516 13:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.516 13:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.775 13:24:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:03.775 "name": "raid_bdev1", 00:25:03.775 "uuid": "c8124a30-b7c6-4d8b-8586-421a4b07681f", 00:25:03.775 "strip_size_kb": 0, 00:25:03.775 "state": 
"online", 00:25:03.775 "raid_level": "raid1", 00:25:03.775 "superblock": true, 00:25:03.775 "num_base_bdevs": 4, 00:25:03.775 "num_base_bdevs_discovered": 2, 00:25:03.775 "num_base_bdevs_operational": 2, 00:25:03.775 "base_bdevs_list": [ 00:25:03.775 { 00:25:03.775 "name": null, 00:25:03.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.775 "is_configured": false, 00:25:03.775 "data_offset": 2048, 00:25:03.775 "data_size": 63488 00:25:03.775 }, 00:25:03.775 { 00:25:03.775 "name": null, 00:25:03.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.775 "is_configured": false, 00:25:03.775 "data_offset": 2048, 00:25:03.775 "data_size": 63488 00:25:03.775 }, 00:25:03.775 { 00:25:03.775 "name": "BaseBdev3", 00:25:03.775 "uuid": "a112e239-50b6-56b5-8308-2133cbdf5ffe", 00:25:03.775 "is_configured": true, 00:25:03.775 "data_offset": 2048, 00:25:03.775 "data_size": 63488 00:25:03.775 }, 00:25:03.775 { 00:25:03.775 "name": "BaseBdev4", 00:25:03.775 "uuid": "d8418e6c-dbca-5408-9fc8-5bae561898f1", 00:25:03.775 "is_configured": true, 00:25:03.775 "data_offset": 2048, 00:25:03.775 "data_size": 63488 00:25:03.775 } 00:25:03.775 ] 00:25:03.775 }' 00:25:03.775 13:24:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:03.775 13:24:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:04.344 13:24:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:04.344 13:24:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:04.344 13:24:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:04.344 13:24:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:04.344 13:24:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:04.344 13:24:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.344 13:24:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:04.913 13:24:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:04.913 "name": "raid_bdev1", 00:25:04.913 "uuid": "c8124a30-b7c6-4d8b-8586-421a4b07681f", 00:25:04.913 "strip_size_kb": 0, 00:25:04.913 "state": "online", 00:25:04.913 "raid_level": "raid1", 00:25:04.913 "superblock": true, 00:25:04.913 "num_base_bdevs": 4, 00:25:04.913 "num_base_bdevs_discovered": 2, 00:25:04.913 "num_base_bdevs_operational": 2, 00:25:04.913 "base_bdevs_list": [ 00:25:04.913 { 00:25:04.913 "name": null, 00:25:04.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.913 "is_configured": false, 00:25:04.913 "data_offset": 2048, 00:25:04.913 "data_size": 63488 00:25:04.913 }, 00:25:04.913 { 00:25:04.913 "name": null, 00:25:04.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:04.913 "is_configured": false, 00:25:04.913 "data_offset": 2048, 00:25:04.913 "data_size": 63488 00:25:04.913 }, 00:25:04.913 { 00:25:04.913 "name": "BaseBdev3", 00:25:04.913 "uuid": "a112e239-50b6-56b5-8308-2133cbdf5ffe", 00:25:04.913 "is_configured": true, 00:25:04.913 "data_offset": 2048, 00:25:04.913 "data_size": 63488 00:25:04.913 }, 00:25:04.913 { 00:25:04.913 "name": "BaseBdev4", 00:25:04.913 "uuid": "d8418e6c-dbca-5408-9fc8-5bae561898f1", 00:25:04.913 "is_configured": true, 00:25:04.913 "data_offset": 2048, 00:25:04.913 "data_size": 63488 00:25:04.913 } 00:25:04.913 ] 00:25:04.913 }' 00:25:04.913 13:24:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:04.913 13:24:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:04.913 13:24:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:25:04.913 13:24:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:04.913 13:24:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:25:05.172 13:24:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:05.431 [2024-07-26 13:24:45.710622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:05.431 [2024-07-26 13:24:45.710664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:05.431 [2024-07-26 13:24:45.710686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa81d60 00:25:05.431 [2024-07-26 13:24:45.710697] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:05.431 [2024-07-26 13:24:45.711014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:05.431 [2024-07-26 13:24:45.711030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:05.431 [2024-07-26 13:24:45.711088] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:05.431 [2024-07-26 13:24:45.711099] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:25:05.431 [2024-07-26 13:24:45.711109] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:05.431 BaseBdev1 00:25:05.431 13:24:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:25:06.370 13:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:06.370 
13:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:06.370 13:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:06.370 13:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:06.370 13:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:06.370 13:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:06.370 13:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:06.370 13:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:06.370 13:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:06.370 13:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:06.370 13:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.370 13:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.370 13:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:06.370 "name": "raid_bdev1", 00:25:06.370 "uuid": "c8124a30-b7c6-4d8b-8586-421a4b07681f", 00:25:06.370 "strip_size_kb": 0, 00:25:06.370 "state": "online", 00:25:06.370 "raid_level": "raid1", 00:25:06.370 "superblock": true, 00:25:06.370 "num_base_bdevs": 4, 00:25:06.370 "num_base_bdevs_discovered": 2, 00:25:06.370 "num_base_bdevs_operational": 2, 00:25:06.370 "base_bdevs_list": [ 00:25:06.370 { 00:25:06.370 "name": null, 00:25:06.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.370 "is_configured": false, 00:25:06.370 "data_offset": 2048, 00:25:06.370 "data_size": 63488 00:25:06.370 }, 
00:25:06.370 { 00:25:06.370 "name": null, 00:25:06.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.370 "is_configured": false, 00:25:06.370 "data_offset": 2048, 00:25:06.370 "data_size": 63488 00:25:06.370 }, 00:25:06.370 { 00:25:06.370 "name": "BaseBdev3", 00:25:06.370 "uuid": "a112e239-50b6-56b5-8308-2133cbdf5ffe", 00:25:06.370 "is_configured": true, 00:25:06.370 "data_offset": 2048, 00:25:06.370 "data_size": 63488 00:25:06.370 }, 00:25:06.370 { 00:25:06.370 "name": "BaseBdev4", 00:25:06.370 "uuid": "d8418e6c-dbca-5408-9fc8-5bae561898f1", 00:25:06.370 "is_configured": true, 00:25:06.370 "data_offset": 2048, 00:25:06.370 "data_size": 63488 00:25:06.370 } 00:25:06.370 ] 00:25:06.370 }' 00:25:06.370 13:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:06.370 13:24:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.309 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:07.309 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:07.309 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:07.309 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:07.309 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:07.309 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.309 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:07.309 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:07.309 "name": "raid_bdev1", 00:25:07.309 "uuid": "c8124a30-b7c6-4d8b-8586-421a4b07681f", 00:25:07.309 
"strip_size_kb": 0, 00:25:07.309 "state": "online", 00:25:07.309 "raid_level": "raid1", 00:25:07.309 "superblock": true, 00:25:07.309 "num_base_bdevs": 4, 00:25:07.309 "num_base_bdevs_discovered": 2, 00:25:07.309 "num_base_bdevs_operational": 2, 00:25:07.309 "base_bdevs_list": [ 00:25:07.309 { 00:25:07.309 "name": null, 00:25:07.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.309 "is_configured": false, 00:25:07.309 "data_offset": 2048, 00:25:07.309 "data_size": 63488 00:25:07.309 }, 00:25:07.309 { 00:25:07.309 "name": null, 00:25:07.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.309 "is_configured": false, 00:25:07.309 "data_offset": 2048, 00:25:07.309 "data_size": 63488 00:25:07.309 }, 00:25:07.309 { 00:25:07.309 "name": "BaseBdev3", 00:25:07.309 "uuid": "a112e239-50b6-56b5-8308-2133cbdf5ffe", 00:25:07.309 "is_configured": true, 00:25:07.309 "data_offset": 2048, 00:25:07.309 "data_size": 63488 00:25:07.309 }, 00:25:07.309 { 00:25:07.309 "name": "BaseBdev4", 00:25:07.309 "uuid": "d8418e6c-dbca-5408-9fc8-5bae561898f1", 00:25:07.309 "is_configured": true, 00:25:07.309 "data_offset": 2048, 00:25:07.309 "data_size": 63488 00:25:07.309 } 00:25:07.309 ] 00:25:07.309 }' 00:25:07.309 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:07.309 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:07.309 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:07.309 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:07.309 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:07.309 13:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:25:07.309 13:24:47 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:07.309 13:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:07.309 13:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:07.309 13:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:07.309 13:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:07.309 13:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:07.309 13:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:07.309 13:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:07.309 13:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:25:07.309 13:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:07.568 [2024-07-26 13:24:47.976646] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:07.568 [2024-07-26 13:24:47.976757] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:25:07.568 [2024-07-26 13:24:47.976771] 
bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:07.568 request: 00:25:07.568 { 00:25:07.568 "base_bdev": "BaseBdev1", 00:25:07.568 "raid_bdev": "raid_bdev1", 00:25:07.568 "method": "bdev_raid_add_base_bdev", 00:25:07.568 "req_id": 1 00:25:07.568 } 00:25:07.568 Got JSON-RPC error response 00:25:07.568 response: 00:25:07.568 { 00:25:07.568 "code": -22, 00:25:07.568 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:07.568 } 00:25:07.568 13:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:25:07.568 13:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:07.568 13:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:07.568 13:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:07.568 13:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:25:08.557 13:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:08.557 13:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:08.557 13:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:08.557 13:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:08.557 13:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:08.557 13:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:08.557 13:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:08.557 13:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:08.557 13:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:25:08.557 13:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:08.557 13:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:08.557 13:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:08.831 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:08.831 "name": "raid_bdev1", 00:25:08.831 "uuid": "c8124a30-b7c6-4d8b-8586-421a4b07681f", 00:25:08.831 "strip_size_kb": 0, 00:25:08.831 "state": "online", 00:25:08.831 "raid_level": "raid1", 00:25:08.831 "superblock": true, 00:25:08.831 "num_base_bdevs": 4, 00:25:08.831 "num_base_bdevs_discovered": 2, 00:25:08.831 "num_base_bdevs_operational": 2, 00:25:08.831 "base_bdevs_list": [ 00:25:08.831 { 00:25:08.831 "name": null, 00:25:08.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.831 "is_configured": false, 00:25:08.831 "data_offset": 2048, 00:25:08.831 "data_size": 63488 00:25:08.831 }, 00:25:08.831 { 00:25:08.831 "name": null, 00:25:08.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.831 "is_configured": false, 00:25:08.831 "data_offset": 2048, 00:25:08.831 "data_size": 63488 00:25:08.831 }, 00:25:08.831 { 00:25:08.831 "name": "BaseBdev3", 00:25:08.831 "uuid": "a112e239-50b6-56b5-8308-2133cbdf5ffe", 00:25:08.831 "is_configured": true, 00:25:08.831 "data_offset": 2048, 00:25:08.831 "data_size": 63488 00:25:08.831 }, 00:25:08.831 { 00:25:08.831 "name": "BaseBdev4", 00:25:08.831 "uuid": "d8418e6c-dbca-5408-9fc8-5bae561898f1", 00:25:08.831 "is_configured": true, 00:25:08.831 "data_offset": 2048, 00:25:08.831 "data_size": 63488 00:25:08.831 } 00:25:08.831 ] 00:25:08.831 }' 00:25:08.831 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:08.831 13:24:49 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:09.399 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:09.399 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:09.400 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:09.400 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:09.400 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:09.400 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.400 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.400 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:09.400 "name": "raid_bdev1", 00:25:09.400 "uuid": "c8124a30-b7c6-4d8b-8586-421a4b07681f", 00:25:09.400 "strip_size_kb": 0, 00:25:09.400 "state": "online", 00:25:09.400 "raid_level": "raid1", 00:25:09.400 "superblock": true, 00:25:09.400 "num_base_bdevs": 4, 00:25:09.400 "num_base_bdevs_discovered": 2, 00:25:09.400 "num_base_bdevs_operational": 2, 00:25:09.400 "base_bdevs_list": [ 00:25:09.400 { 00:25:09.400 "name": null, 00:25:09.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.400 "is_configured": false, 00:25:09.400 "data_offset": 2048, 00:25:09.400 "data_size": 63488 00:25:09.400 }, 00:25:09.400 { 00:25:09.400 "name": null, 00:25:09.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.400 "is_configured": false, 00:25:09.400 "data_offset": 2048, 00:25:09.400 "data_size": 63488 00:25:09.400 }, 00:25:09.400 { 00:25:09.400 "name": "BaseBdev3", 00:25:09.400 "uuid": "a112e239-50b6-56b5-8308-2133cbdf5ffe", 
00:25:09.400 "is_configured": true, 00:25:09.400 "data_offset": 2048, 00:25:09.400 "data_size": 63488 00:25:09.400 }, 00:25:09.400 { 00:25:09.400 "name": "BaseBdev4", 00:25:09.400 "uuid": "d8418e6c-dbca-5408-9fc8-5bae561898f1", 00:25:09.400 "is_configured": true, 00:25:09.400 "data_offset": 2048, 00:25:09.400 "data_size": 63488 00:25:09.400 } 00:25:09.400 ] 00:25:09.400 }' 00:25:09.400 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:09.659 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:09.659 13:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:09.659 13:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:09.659 13:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@798 -- # killprocess 803834 00:25:09.659 13:24:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 803834 ']' 00:25:09.659 13:24:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 803834 00:25:09.659 13:24:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:25:09.659 13:24:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:09.659 13:24:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 803834 00:25:09.659 13:24:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:09.659 13:24:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:09.659 13:24:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 803834' 00:25:09.659 killing process with pid 803834 00:25:09.659 13:24:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 803834 00:25:09.659 Received 
shutdown signal, test time was about 60.000000 seconds 00:25:09.659 00:25:09.659 Latency(us) 00:25:09.659 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.659 =================================================================================================================== 00:25:09.659 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:09.659 [2024-07-26 13:24:50.099990] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:09.659 [2024-07-26 13:24:50.100077] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:09.659 [2024-07-26 13:24:50.100128] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:09.659 [2024-07-26 13:24:50.100147] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9e68f0 name raid_bdev1, state offline 00:25:09.659 13:24:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 803834 00:25:09.659 [2024-07-26 13:24:50.140008] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:09.919 13:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:25:09.919 00:25:09.919 real 0m36.034s 00:25:09.919 user 0m53.424s 00:25:09.919 sys 0m5.823s 00:25:09.919 13:24:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:09.920 ************************************ 00:25:09.920 END TEST raid_rebuild_test_sb 00:25:09.920 ************************************ 00:25:09.920 13:24:50 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:25:09.920 13:24:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:25:09.920 13:24:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:09.920 13:24:50 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:25:09.920 ************************************ 00:25:09.920 START TEST raid_rebuild_test_io 00:25:09.920 ************************************ 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:09.920 13:24:50 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # raid_pid=810242 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 810242 /var/tmp/spdk-raid.sock 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 810242 ']' 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:09.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:09.920 13:24:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:10.180 [2024-07-26 13:24:50.487820] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:25:10.180 [2024-07-26 13:24:50.487881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid810242 ] 00:25:10.180 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:10.180 Zero copy mechanism will not be used. 
00:25:10.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.180 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:10.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.180 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:10.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.180 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:10.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.180 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:10.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.180 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:10.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.180 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:10.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.180 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:10.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.180 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:10.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.180 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:10.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.180 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:10.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.180 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:10.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.180 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:10.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.180 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:10.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.180 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:10.180 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.180 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:10.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.180 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:10.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.180 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:10.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.180 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:10.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.180 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:10.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.180 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:10.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.180 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:10.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.180 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:10.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.180 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:10.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.180 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:10.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.180 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:10.180 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.180 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:10.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.181 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:10.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.181 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:10.181 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.181 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:10.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.181 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:10.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.181 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:10.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:10.181 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:10.181 [2024-07-26 13:24:50.620712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.181 [2024-07-26 13:24:50.701994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.440 [2024-07-26 13:24:50.766994] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:10.440 [2024-07-26 13:24:50.767030] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:11.008 13:24:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:11.008 13:24:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:25:11.008 13:24:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:11.008 13:24:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:11.267 BaseBdev1_malloc 00:25:11.267 13:24:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:11.267 [2024-07-26 13:24:51.756934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:11.267 [2024-07-26 13:24:51.756980] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:25:11.267 [2024-07-26 13:24:51.756999] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25605f0 00:25:11.267 [2024-07-26 13:24:51.757010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:11.267 [2024-07-26 13:24:51.758437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:11.267 [2024-07-26 13:24:51.758467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:11.267 BaseBdev1 00:25:11.267 13:24:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:11.267 13:24:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:11.526 BaseBdev2_malloc 00:25:11.526 13:24:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:11.785 [2024-07-26 13:24:52.210368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:11.785 [2024-07-26 13:24:52.210402] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:11.785 [2024-07-26 13:24:52.210418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2704130 00:25:11.785 [2024-07-26 13:24:52.210429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:11.785 [2024-07-26 13:24:52.211730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:11.785 [2024-07-26 13:24:52.211755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:11.785 BaseBdev2 00:25:11.785 13:24:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in 
"${base_bdevs[@]}" 00:25:11.785 13:24:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:12.044 BaseBdev3_malloc 00:25:12.044 13:24:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:12.303 [2024-07-26 13:24:52.655796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:12.303 [2024-07-26 13:24:52.655835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:12.303 [2024-07-26 13:24:52.655852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26fa420 00:25:12.303 [2024-07-26 13:24:52.655863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:12.303 [2024-07-26 13:24:52.657119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:12.303 [2024-07-26 13:24:52.657152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:12.303 BaseBdev3 00:25:12.303 13:24:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:12.303 13:24:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:12.562 BaseBdev4_malloc 00:25:12.562 13:24:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:12.824 [2024-07-26 13:24:53.117042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:12.824 [2024-07-26 
13:24:53.117079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:12.824 [2024-07-26 13:24:53.117095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26fad40 00:25:12.824 [2024-07-26 13:24:53.117106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:12.824 [2024-07-26 13:24:53.118349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:12.824 [2024-07-26 13:24:53.118374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:12.824 BaseBdev4 00:25:12.824 13:24:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:13.091 spare_malloc 00:25:13.091 13:24:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:13.091 spare_delay 00:25:13.091 13:24:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:13.349 [2024-07-26 13:24:53.786839] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:13.349 [2024-07-26 13:24:53.786876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:13.349 [2024-07-26 13:24:53.786892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2559db0 00:25:13.349 [2024-07-26 13:24:53.786903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:13.349 [2024-07-26 13:24:53.788163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:13.349 [2024-07-26 13:24:53.788187] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:13.349 spare 00:25:13.349 13:24:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:13.609 [2024-07-26 13:24:54.011457] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:13.609 [2024-07-26 13:24:54.012512] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:13.609 [2024-07-26 13:24:54.012558] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:13.609 [2024-07-26 13:24:54.012598] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:13.609 [2024-07-26 13:24:54.012670] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x255c5b0 00:25:13.609 [2024-07-26 13:24:54.012679] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:13.609 [2024-07-26 13:24:54.012857] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x255f380 00:25:13.609 [2024-07-26 13:24:54.012984] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x255c5b0 00:25:13.609 [2024-07-26 13:24:54.012993] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x255c5b0 00:25:13.609 [2024-07-26 13:24:54.013088] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:13.609 13:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:25:13.609 13:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:13.609 13:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:13.609 13:24:54 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:13.609 13:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:13.609 13:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:13.609 13:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:13.609 13:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:13.609 13:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:13.609 13:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:13.609 13:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.609 13:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.868 13:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:13.868 "name": "raid_bdev1", 00:25:13.868 "uuid": "9c1ec310-b087-4fee-bc6c-5e1f387dbe93", 00:25:13.868 "strip_size_kb": 0, 00:25:13.868 "state": "online", 00:25:13.868 "raid_level": "raid1", 00:25:13.868 "superblock": false, 00:25:13.868 "num_base_bdevs": 4, 00:25:13.868 "num_base_bdevs_discovered": 4, 00:25:13.868 "num_base_bdevs_operational": 4, 00:25:13.868 "base_bdevs_list": [ 00:25:13.868 { 00:25:13.868 "name": "BaseBdev1", 00:25:13.868 "uuid": "053f0f18-421d-5fcc-bfeb-5a033dc1c52a", 00:25:13.868 "is_configured": true, 00:25:13.868 "data_offset": 0, 00:25:13.868 "data_size": 65536 00:25:13.868 }, 00:25:13.868 { 00:25:13.868 "name": "BaseBdev2", 00:25:13.868 "uuid": "298ecac2-91e4-51fe-9d83-43218b34d6ca", 00:25:13.868 "is_configured": true, 00:25:13.868 "data_offset": 0, 00:25:13.868 "data_size": 65536 00:25:13.868 }, 00:25:13.868 { 
00:25:13.868 "name": "BaseBdev3", 00:25:13.868 "uuid": "845c6e75-431c-5ad0-8759-e06cf92a2bba", 00:25:13.868 "is_configured": true, 00:25:13.868 "data_offset": 0, 00:25:13.868 "data_size": 65536 00:25:13.868 }, 00:25:13.868 { 00:25:13.868 "name": "BaseBdev4", 00:25:13.868 "uuid": "00c54caf-a39d-5cb7-8c8f-58756f4ac14e", 00:25:13.868 "is_configured": true, 00:25:13.868 "data_offset": 0, 00:25:13.868 "data_size": 65536 00:25:13.868 } 00:25:13.868 ] 00:25:13.868 }' 00:25:13.868 13:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:13.868 13:24:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:14.435 13:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:14.435 13:24:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:25:14.695 [2024-07-26 13:24:55.030584] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:14.695 13:24:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:25:14.695 13:24:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.695 13:24:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:15.263 13:24:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:25:15.263 13:24:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:25:15.263 13:24:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:15.263 13:24:55 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@638 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:25:15.263 [2024-07-26 13:24:55.665983] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x255f8e0 00:25:15.263 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:15.263 Zero copy mechanism will not be used. 00:25:15.263 Running I/O for 60 seconds... 00:25:15.263 [2024-07-26 13:24:55.774487] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:15.263 [2024-07-26 13:24:55.789421] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x255f8e0 00:25:15.522 13:24:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:15.522 13:24:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:15.522 13:24:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:15.522 13:24:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:15.522 13:24:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:15.522 13:24:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:15.522 13:24:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:15.522 13:24:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:15.522 13:24:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:15.522 13:24:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:15.522 13:24:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.522 
13:24:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.781 13:24:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:15.781 "name": "raid_bdev1", 00:25:15.781 "uuid": "9c1ec310-b087-4fee-bc6c-5e1f387dbe93", 00:25:15.781 "strip_size_kb": 0, 00:25:15.781 "state": "online", 00:25:15.781 "raid_level": "raid1", 00:25:15.781 "superblock": false, 00:25:15.781 "num_base_bdevs": 4, 00:25:15.781 "num_base_bdevs_discovered": 3, 00:25:15.781 "num_base_bdevs_operational": 3, 00:25:15.781 "base_bdevs_list": [ 00:25:15.781 { 00:25:15.781 "name": null, 00:25:15.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.781 "is_configured": false, 00:25:15.781 "data_offset": 0, 00:25:15.781 "data_size": 65536 00:25:15.781 }, 00:25:15.781 { 00:25:15.781 "name": "BaseBdev2", 00:25:15.781 "uuid": "298ecac2-91e4-51fe-9d83-43218b34d6ca", 00:25:15.781 "is_configured": true, 00:25:15.781 "data_offset": 0, 00:25:15.781 "data_size": 65536 00:25:15.781 }, 00:25:15.781 { 00:25:15.781 "name": "BaseBdev3", 00:25:15.781 "uuid": "845c6e75-431c-5ad0-8759-e06cf92a2bba", 00:25:15.781 "is_configured": true, 00:25:15.781 "data_offset": 0, 00:25:15.781 "data_size": 65536 00:25:15.781 }, 00:25:15.781 { 00:25:15.781 "name": "BaseBdev4", 00:25:15.781 "uuid": "00c54caf-a39d-5cb7-8c8f-58756f4ac14e", 00:25:15.781 "is_configured": true, 00:25:15.781 "data_offset": 0, 00:25:15.781 "data_size": 65536 00:25:15.781 } 00:25:15.781 ] 00:25:15.781 }' 00:25:15.781 13:24:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:15.781 13:24:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:16.349 13:24:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:16.349 [2024-07-26 13:24:56.808994] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:16.349 13:24:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:16.608 [2024-07-26 13:24:56.877117] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25f7950 00:25:16.608 [2024-07-26 13:24:56.879380] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:16.608 [2024-07-26 13:24:56.998753] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:16.608 [2024-07-26 13:24:56.999839] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:16.867 [2024-07-26 13:24:57.220172] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:16.867 [2024-07-26 13:24:57.220425] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:17.126 [2024-07-26 13:24:57.553341] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:17.385 [2024-07-26 13:24:57.765571] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:17.385 [2024-07-26 13:24:57.765732] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:17.385 13:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:17.385 13:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:17.385 13:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:17.385 13:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local 
target=spare 00:25:17.385 13:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:17.385 13:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.385 13:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:17.644 [2024-07-26 13:24:58.030889] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:17.644 13:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:17.644 "name": "raid_bdev1", 00:25:17.644 "uuid": "9c1ec310-b087-4fee-bc6c-5e1f387dbe93", 00:25:17.644 "strip_size_kb": 0, 00:25:17.644 "state": "online", 00:25:17.644 "raid_level": "raid1", 00:25:17.644 "superblock": false, 00:25:17.644 "num_base_bdevs": 4, 00:25:17.644 "num_base_bdevs_discovered": 4, 00:25:17.644 "num_base_bdevs_operational": 4, 00:25:17.644 "process": { 00:25:17.644 "type": "rebuild", 00:25:17.644 "target": "spare", 00:25:17.644 "progress": { 00:25:17.644 "blocks": 14336, 00:25:17.644 "percent": 21 00:25:17.644 } 00:25:17.644 }, 00:25:17.644 "base_bdevs_list": [ 00:25:17.644 { 00:25:17.644 "name": "spare", 00:25:17.644 "uuid": "b3988100-7ec0-5d6f-a205-fa2652da201c", 00:25:17.644 "is_configured": true, 00:25:17.644 "data_offset": 0, 00:25:17.644 "data_size": 65536 00:25:17.644 }, 00:25:17.644 { 00:25:17.644 "name": "BaseBdev2", 00:25:17.644 "uuid": "298ecac2-91e4-51fe-9d83-43218b34d6ca", 00:25:17.644 "is_configured": true, 00:25:17.644 "data_offset": 0, 00:25:17.644 "data_size": 65536 00:25:17.644 }, 00:25:17.644 { 00:25:17.644 "name": "BaseBdev3", 00:25:17.644 "uuid": "845c6e75-431c-5ad0-8759-e06cf92a2bba", 00:25:17.644 "is_configured": true, 00:25:17.644 "data_offset": 0, 00:25:17.644 "data_size": 65536 00:25:17.644 }, 00:25:17.644 { 00:25:17.644 
"name": "BaseBdev4", 00:25:17.644 "uuid": "00c54caf-a39d-5cb7-8c8f-58756f4ac14e", 00:25:17.644 "is_configured": true, 00:25:17.644 "data_offset": 0, 00:25:17.644 "data_size": 65536 00:25:17.644 } 00:25:17.644 ] 00:25:17.644 }' 00:25:17.644 13:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:17.903 13:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:17.903 13:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:17.903 13:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:17.903 13:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:18.163 [2024-07-26 13:24:58.453457] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:18.163 [2024-07-26 13:24:58.481255] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:25:18.163 [2024-07-26 13:24:58.590691] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:18.163 [2024-07-26 13:24:58.602066] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:18.163 [2024-07-26 13:24:58.602094] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:18.163 [2024-07-26 13:24:58.602105] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:18.163 [2024-07-26 13:24:58.624657] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x255f8e0 00:25:18.163 13:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:18.163 13:24:58 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:18.163 13:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:18.163 13:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:18.163 13:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:18.163 13:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:18.163 13:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:18.163 13:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:18.163 13:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:18.163 13:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:18.163 13:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.163 13:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.513 13:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:18.513 "name": "raid_bdev1", 00:25:18.513 "uuid": "9c1ec310-b087-4fee-bc6c-5e1f387dbe93", 00:25:18.513 "strip_size_kb": 0, 00:25:18.513 "state": "online", 00:25:18.513 "raid_level": "raid1", 00:25:18.513 "superblock": false, 00:25:18.513 "num_base_bdevs": 4, 00:25:18.513 "num_base_bdevs_discovered": 3, 00:25:18.513 "num_base_bdevs_operational": 3, 00:25:18.513 "base_bdevs_list": [ 00:25:18.513 { 00:25:18.513 "name": null, 00:25:18.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:18.513 "is_configured": false, 00:25:18.513 "data_offset": 0, 00:25:18.513 "data_size": 65536 00:25:18.513 }, 00:25:18.513 { 00:25:18.513 "name": "BaseBdev2", 
00:25:18.513 "uuid": "298ecac2-91e4-51fe-9d83-43218b34d6ca", 00:25:18.513 "is_configured": true, 00:25:18.513 "data_offset": 0, 00:25:18.513 "data_size": 65536 00:25:18.513 }, 00:25:18.513 { 00:25:18.513 "name": "BaseBdev3", 00:25:18.513 "uuid": "845c6e75-431c-5ad0-8759-e06cf92a2bba", 00:25:18.513 "is_configured": true, 00:25:18.513 "data_offset": 0, 00:25:18.513 "data_size": 65536 00:25:18.513 }, 00:25:18.513 { 00:25:18.513 "name": "BaseBdev4", 00:25:18.513 "uuid": "00c54caf-a39d-5cb7-8c8f-58756f4ac14e", 00:25:18.513 "is_configured": true, 00:25:18.513 "data_offset": 0, 00:25:18.513 "data_size": 65536 00:25:18.513 } 00:25:18.513 ] 00:25:18.513 }' 00:25:18.513 13:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:18.513 13:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:19.081 13:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:19.081 13:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:19.081 13:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:19.081 13:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:19.081 13:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:19.081 13:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.081 13:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.340 13:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:19.340 "name": "raid_bdev1", 00:25:19.340 "uuid": "9c1ec310-b087-4fee-bc6c-5e1f387dbe93", 00:25:19.340 "strip_size_kb": 0, 00:25:19.340 "state": "online", 
00:25:19.340 "raid_level": "raid1", 00:25:19.340 "superblock": false, 00:25:19.340 "num_base_bdevs": 4, 00:25:19.340 "num_base_bdevs_discovered": 3, 00:25:19.340 "num_base_bdevs_operational": 3, 00:25:19.340 "base_bdevs_list": [ 00:25:19.340 { 00:25:19.340 "name": null, 00:25:19.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.340 "is_configured": false, 00:25:19.340 "data_offset": 0, 00:25:19.340 "data_size": 65536 00:25:19.340 }, 00:25:19.340 { 00:25:19.340 "name": "BaseBdev2", 00:25:19.340 "uuid": "298ecac2-91e4-51fe-9d83-43218b34d6ca", 00:25:19.340 "is_configured": true, 00:25:19.340 "data_offset": 0, 00:25:19.340 "data_size": 65536 00:25:19.340 }, 00:25:19.340 { 00:25:19.340 "name": "BaseBdev3", 00:25:19.340 "uuid": "845c6e75-431c-5ad0-8759-e06cf92a2bba", 00:25:19.340 "is_configured": true, 00:25:19.340 "data_offset": 0, 00:25:19.340 "data_size": 65536 00:25:19.340 }, 00:25:19.340 { 00:25:19.340 "name": "BaseBdev4", 00:25:19.340 "uuid": "00c54caf-a39d-5cb7-8c8f-58756f4ac14e", 00:25:19.340 "is_configured": true, 00:25:19.340 "data_offset": 0, 00:25:19.340 "data_size": 65536 00:25:19.340 } 00:25:19.340 ] 00:25:19.340 }' 00:25:19.340 13:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:19.340 13:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:19.340 13:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:19.340 13:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:19.340 13:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:19.600 [2024-07-26 13:24:59.947287] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:19.600 13:24:59 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@678 -- # sleep 1 00:25:19.600 [2024-07-26 13:25:00.031540] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2586890 00:25:19.600 [2024-07-26 13:25:00.032997] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:19.859 [2024-07-26 13:25:00.177479] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:20.118 [2024-07-26 13:25:00.425717] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:20.376 [2024-07-26 13:25:00.774394] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:20.376 [2024-07-26 13:25:00.774769] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:25:20.635 [2024-07-26 13:25:00.986776] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:20.635 [2024-07-26 13:25:00.987113] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:20.635 13:25:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:20.635 13:25:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:20.635 13:25:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:20.635 13:25:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:20.635 13:25:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:20.635 13:25:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:25:20.635 13:25:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.895 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:20.895 "name": "raid_bdev1", 00:25:20.895 "uuid": "9c1ec310-b087-4fee-bc6c-5e1f387dbe93", 00:25:20.895 "strip_size_kb": 0, 00:25:20.895 "state": "online", 00:25:20.895 "raid_level": "raid1", 00:25:20.895 "superblock": false, 00:25:20.895 "num_base_bdevs": 4, 00:25:20.895 "num_base_bdevs_discovered": 4, 00:25:20.895 "num_base_bdevs_operational": 4, 00:25:20.895 "process": { 00:25:20.895 "type": "rebuild", 00:25:20.895 "target": "spare", 00:25:20.895 "progress": { 00:25:20.895 "blocks": 10240, 00:25:20.895 "percent": 15 00:25:20.895 } 00:25:20.895 }, 00:25:20.895 "base_bdevs_list": [ 00:25:20.895 { 00:25:20.895 "name": "spare", 00:25:20.895 "uuid": "b3988100-7ec0-5d6f-a205-fa2652da201c", 00:25:20.895 "is_configured": true, 00:25:20.895 "data_offset": 0, 00:25:20.895 "data_size": 65536 00:25:20.895 }, 00:25:20.895 { 00:25:20.895 "name": "BaseBdev2", 00:25:20.895 "uuid": "298ecac2-91e4-51fe-9d83-43218b34d6ca", 00:25:20.895 "is_configured": true, 00:25:20.895 "data_offset": 0, 00:25:20.895 "data_size": 65536 00:25:20.895 }, 00:25:20.895 { 00:25:20.895 "name": "BaseBdev3", 00:25:20.895 "uuid": "845c6e75-431c-5ad0-8759-e06cf92a2bba", 00:25:20.895 "is_configured": true, 00:25:20.895 "data_offset": 0, 00:25:20.895 "data_size": 65536 00:25:20.895 }, 00:25:20.895 { 00:25:20.895 "name": "BaseBdev4", 00:25:20.895 "uuid": "00c54caf-a39d-5cb7-8c8f-58756f4ac14e", 00:25:20.895 "is_configured": true, 00:25:20.895 "data_offset": 0, 00:25:20.895 "data_size": 65536 00:25:20.895 } 00:25:20.895 ] 00:25:20.895 }' 00:25:20.895 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:20.895 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d 
]] 00:25:20.895 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:20.895 [2024-07-26 13:25:01.310214] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:20.895 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:20.895 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:25:20.895 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:25:20.895 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:25:20.895 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:25:20.895 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:21.154 [2024-07-26 13:25:01.544096] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:21.154 [2024-07-26 13:25:01.662392] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x255f8e0 00:25:21.154 [2024-07-26 13:25:01.662420] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x2586890 00:25:21.449 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:25:21.449 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:25:21.449 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:21.449 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:21.449 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:21.449 13:25:01 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:21.449 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:21.449 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.449 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.449 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:21.449 "name": "raid_bdev1", 00:25:21.449 "uuid": "9c1ec310-b087-4fee-bc6c-5e1f387dbe93", 00:25:21.449 "strip_size_kb": 0, 00:25:21.449 "state": "online", 00:25:21.449 "raid_level": "raid1", 00:25:21.449 "superblock": false, 00:25:21.449 "num_base_bdevs": 4, 00:25:21.449 "num_base_bdevs_discovered": 3, 00:25:21.449 "num_base_bdevs_operational": 3, 00:25:21.449 "process": { 00:25:21.449 "type": "rebuild", 00:25:21.449 "target": "spare", 00:25:21.449 "progress": { 00:25:21.449 "blocks": 20480, 00:25:21.449 "percent": 31 00:25:21.449 } 00:25:21.449 }, 00:25:21.449 "base_bdevs_list": [ 00:25:21.449 { 00:25:21.449 "name": "spare", 00:25:21.449 "uuid": "b3988100-7ec0-5d6f-a205-fa2652da201c", 00:25:21.449 "is_configured": true, 00:25:21.449 "data_offset": 0, 00:25:21.449 "data_size": 65536 00:25:21.449 }, 00:25:21.449 { 00:25:21.449 "name": null, 00:25:21.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:21.449 "is_configured": false, 00:25:21.449 "data_offset": 0, 00:25:21.449 "data_size": 65536 00:25:21.449 }, 00:25:21.449 { 00:25:21.449 "name": "BaseBdev3", 00:25:21.449 "uuid": "845c6e75-431c-5ad0-8759-e06cf92a2bba", 00:25:21.449 "is_configured": true, 00:25:21.449 "data_offset": 0, 00:25:21.449 "data_size": 65536 00:25:21.449 }, 00:25:21.449 { 00:25:21.449 "name": "BaseBdev4", 00:25:21.449 "uuid": "00c54caf-a39d-5cb7-8c8f-58756f4ac14e", 00:25:21.449 
"is_configured": true, 00:25:21.449 "data_offset": 0, 00:25:21.449 "data_size": 65536 00:25:21.449 } 00:25:21.449 ] 00:25:21.449 }' 00:25:21.449 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:21.449 [2024-07-26 13:25:01.877204] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:21.449 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:21.449 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:21.449 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:21.449 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # local timeout=887 00:25:21.449 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:21.449 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:21.449 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:21.449 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:21.449 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:21.449 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:21.449 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.735 13:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.735 13:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:21.735 "name": "raid_bdev1", 00:25:21.735 
"uuid": "9c1ec310-b087-4fee-bc6c-5e1f387dbe93", 00:25:21.735 "strip_size_kb": 0, 00:25:21.735 "state": "online", 00:25:21.735 "raid_level": "raid1", 00:25:21.735 "superblock": false, 00:25:21.735 "num_base_bdevs": 4, 00:25:21.735 "num_base_bdevs_discovered": 3, 00:25:21.735 "num_base_bdevs_operational": 3, 00:25:21.735 "process": { 00:25:21.735 "type": "rebuild", 00:25:21.735 "target": "spare", 00:25:21.735 "progress": { 00:25:21.735 "blocks": 24576, 00:25:21.735 "percent": 37 00:25:21.735 } 00:25:21.735 }, 00:25:21.735 "base_bdevs_list": [ 00:25:21.735 { 00:25:21.735 "name": "spare", 00:25:21.735 "uuid": "b3988100-7ec0-5d6f-a205-fa2652da201c", 00:25:21.735 "is_configured": true, 00:25:21.735 "data_offset": 0, 00:25:21.735 "data_size": 65536 00:25:21.735 }, 00:25:21.735 { 00:25:21.735 "name": null, 00:25:21.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:21.735 "is_configured": false, 00:25:21.735 "data_offset": 0, 00:25:21.735 "data_size": 65536 00:25:21.735 }, 00:25:21.735 { 00:25:21.735 "name": "BaseBdev3", 00:25:21.735 "uuid": "845c6e75-431c-5ad0-8759-e06cf92a2bba", 00:25:21.735 "is_configured": true, 00:25:21.735 "data_offset": 0, 00:25:21.735 "data_size": 65536 00:25:21.735 }, 00:25:21.735 { 00:25:21.735 "name": "BaseBdev4", 00:25:21.735 "uuid": "00c54caf-a39d-5cb7-8c8f-58756f4ac14e", 00:25:21.735 "is_configured": true, 00:25:21.735 "data_offset": 0, 00:25:21.735 "data_size": 65536 00:25:21.735 } 00:25:21.735 ] 00:25:21.735 }' 00:25:21.735 13:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:21.735 13:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:21.735 13:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:21.994 13:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:21.994 13:25:02 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@726 -- # sleep 1 00:25:22.563 [2024-07-26 13:25:02.918314] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:25:22.822 13:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:22.822 13:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:22.822 13:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:22.822 13:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:22.822 13:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:22.822 13:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:22.822 13:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.822 13:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:23.081 [2024-07-26 13:25:03.532492] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:25:23.081 13:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:23.081 "name": "raid_bdev1", 00:25:23.081 "uuid": "9c1ec310-b087-4fee-bc6c-5e1f387dbe93", 00:25:23.081 "strip_size_kb": 0, 00:25:23.081 "state": "online", 00:25:23.081 "raid_level": "raid1", 00:25:23.081 "superblock": false, 00:25:23.081 "num_base_bdevs": 4, 00:25:23.081 "num_base_bdevs_discovered": 3, 00:25:23.081 "num_base_bdevs_operational": 3, 00:25:23.081 "process": { 00:25:23.081 "type": "rebuild", 00:25:23.081 "target": "spare", 00:25:23.081 "progress": { 00:25:23.081 "blocks": 45056, 00:25:23.081 "percent": 68 
00:25:23.081 } 00:25:23.081 }, 00:25:23.081 "base_bdevs_list": [ 00:25:23.081 { 00:25:23.081 "name": "spare", 00:25:23.081 "uuid": "b3988100-7ec0-5d6f-a205-fa2652da201c", 00:25:23.081 "is_configured": true, 00:25:23.081 "data_offset": 0, 00:25:23.081 "data_size": 65536 00:25:23.081 }, 00:25:23.081 { 00:25:23.081 "name": null, 00:25:23.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:23.081 "is_configured": false, 00:25:23.081 "data_offset": 0, 00:25:23.081 "data_size": 65536 00:25:23.081 }, 00:25:23.081 { 00:25:23.081 "name": "BaseBdev3", 00:25:23.081 "uuid": "845c6e75-431c-5ad0-8759-e06cf92a2bba", 00:25:23.081 "is_configured": true, 00:25:23.081 "data_offset": 0, 00:25:23.081 "data_size": 65536 00:25:23.081 }, 00:25:23.081 { 00:25:23.081 "name": "BaseBdev4", 00:25:23.081 "uuid": "00c54caf-a39d-5cb7-8c8f-58756f4ac14e", 00:25:23.081 "is_configured": true, 00:25:23.081 "data_offset": 0, 00:25:23.081 "data_size": 65536 00:25:23.081 } 00:25:23.081 ] 00:25:23.081 }' 00:25:23.081 13:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:23.081 13:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:23.081 13:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:23.081 13:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:23.081 13:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:23.340 [2024-07-26 13:25:03.860331] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:25:23.598 [2024-07-26 13:25:03.994304] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:25:23.856 [2024-07-26 13:25:04.321603] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 
55296 offset_end: 61440 00:25:24.116 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:24.116 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:24.116 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:24.116 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:24.116 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:24.116 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:24.116 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:24.116 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:24.375 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:24.375 "name": "raid_bdev1", 00:25:24.375 "uuid": "9c1ec310-b087-4fee-bc6c-5e1f387dbe93", 00:25:24.375 "strip_size_kb": 0, 00:25:24.375 "state": "online", 00:25:24.375 "raid_level": "raid1", 00:25:24.375 "superblock": false, 00:25:24.375 "num_base_bdevs": 4, 00:25:24.375 "num_base_bdevs_discovered": 3, 00:25:24.375 "num_base_bdevs_operational": 3, 00:25:24.375 "process": { 00:25:24.375 "type": "rebuild", 00:25:24.375 "target": "spare", 00:25:24.375 "progress": { 00:25:24.375 "blocks": 63488, 00:25:24.375 "percent": 96 00:25:24.375 } 00:25:24.375 }, 00:25:24.375 "base_bdevs_list": [ 00:25:24.375 { 00:25:24.375 "name": "spare", 00:25:24.375 "uuid": "b3988100-7ec0-5d6f-a205-fa2652da201c", 00:25:24.375 "is_configured": true, 00:25:24.375 "data_offset": 0, 00:25:24.375 "data_size": 65536 00:25:24.375 }, 00:25:24.375 { 00:25:24.375 "name": null, 00:25:24.375 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:25:24.375 "is_configured": false, 00:25:24.375 "data_offset": 0, 00:25:24.375 "data_size": 65536 00:25:24.375 }, 00:25:24.375 { 00:25:24.375 "name": "BaseBdev3", 00:25:24.375 "uuid": "845c6e75-431c-5ad0-8759-e06cf92a2bba", 00:25:24.375 "is_configured": true, 00:25:24.375 "data_offset": 0, 00:25:24.375 "data_size": 65536 00:25:24.375 }, 00:25:24.375 { 00:25:24.375 "name": "BaseBdev4", 00:25:24.375 "uuid": "00c54caf-a39d-5cb7-8c8f-58756f4ac14e", 00:25:24.375 "is_configured": true, 00:25:24.375 "data_offset": 0, 00:25:24.375 "data_size": 65536 00:25:24.375 } 00:25:24.375 ] 00:25:24.375 }' 00:25:24.375 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:24.375 [2024-07-26 13:25:04.760799] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:24.375 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:24.375 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:24.375 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:24.375 13:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:24.375 [2024-07-26 13:25:04.867272] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:24.375 [2024-07-26 13:25:04.868812] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:25.313 13:25:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:25.313 13:25:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:25.313 13:25:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:25.313 13:25:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 
-- # local process_type=rebuild 00:25:25.313 13:25:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:25.313 13:25:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:25.313 13:25:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.313 13:25:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.573 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:25.573 "name": "raid_bdev1", 00:25:25.573 "uuid": "9c1ec310-b087-4fee-bc6c-5e1f387dbe93", 00:25:25.573 "strip_size_kb": 0, 00:25:25.573 "state": "online", 00:25:25.573 "raid_level": "raid1", 00:25:25.573 "superblock": false, 00:25:25.573 "num_base_bdevs": 4, 00:25:25.573 "num_base_bdevs_discovered": 3, 00:25:25.573 "num_base_bdevs_operational": 3, 00:25:25.573 "base_bdevs_list": [ 00:25:25.573 { 00:25:25.573 "name": "spare", 00:25:25.573 "uuid": "b3988100-7ec0-5d6f-a205-fa2652da201c", 00:25:25.573 "is_configured": true, 00:25:25.573 "data_offset": 0, 00:25:25.573 "data_size": 65536 00:25:25.573 }, 00:25:25.573 { 00:25:25.573 "name": null, 00:25:25.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.573 "is_configured": false, 00:25:25.573 "data_offset": 0, 00:25:25.573 "data_size": 65536 00:25:25.573 }, 00:25:25.573 { 00:25:25.573 "name": "BaseBdev3", 00:25:25.573 "uuid": "845c6e75-431c-5ad0-8759-e06cf92a2bba", 00:25:25.573 "is_configured": true, 00:25:25.573 "data_offset": 0, 00:25:25.573 "data_size": 65536 00:25:25.573 }, 00:25:25.573 { 00:25:25.573 "name": "BaseBdev4", 00:25:25.573 "uuid": "00c54caf-a39d-5cb7-8c8f-58756f4ac14e", 00:25:25.573 "is_configured": true, 00:25:25.573 "data_offset": 0, 00:25:25.573 "data_size": 65536 00:25:25.573 } 00:25:25.573 ] 00:25:25.573 }' 00:25:25.573 13:25:06 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:25.832 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:25.832 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:25.832 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:25:25.832 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # break 00:25:25.832 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:25.832 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:25.832 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:25.833 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:25.833 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:25.833 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.833 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:25.833 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:25.833 "name": "raid_bdev1", 00:25:25.833 "uuid": "9c1ec310-b087-4fee-bc6c-5e1f387dbe93", 00:25:25.833 "strip_size_kb": 0, 00:25:25.833 "state": "online", 00:25:25.833 "raid_level": "raid1", 00:25:25.833 "superblock": false, 00:25:25.833 "num_base_bdevs": 4, 00:25:25.833 "num_base_bdevs_discovered": 3, 00:25:25.833 "num_base_bdevs_operational": 3, 00:25:25.833 "base_bdevs_list": [ 00:25:25.833 { 00:25:25.833 "name": "spare", 00:25:25.833 "uuid": "b3988100-7ec0-5d6f-a205-fa2652da201c", 
00:25:25.833 "is_configured": true, 00:25:25.833 "data_offset": 0, 00:25:25.833 "data_size": 65536 00:25:25.833 }, 00:25:25.833 { 00:25:25.833 "name": null, 00:25:25.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.833 "is_configured": false, 00:25:25.833 "data_offset": 0, 00:25:25.833 "data_size": 65536 00:25:25.833 }, 00:25:25.833 { 00:25:25.833 "name": "BaseBdev3", 00:25:25.833 "uuid": "845c6e75-431c-5ad0-8759-e06cf92a2bba", 00:25:25.833 "is_configured": true, 00:25:25.833 "data_offset": 0, 00:25:25.833 "data_size": 65536 00:25:25.833 }, 00:25:25.833 { 00:25:25.833 "name": "BaseBdev4", 00:25:25.833 "uuid": "00c54caf-a39d-5cb7-8c8f-58756f4ac14e", 00:25:25.833 "is_configured": true, 00:25:25.833 "data_offset": 0, 00:25:25.833 "data_size": 65536 00:25:25.833 } 00:25:25.833 ] 00:25:25.833 }' 00:25:25.833 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:25.833 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:25.833 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:26.093 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:26.093 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:26.093 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:26.093 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:26.093 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:26.093 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:26.093 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:26.093 13:25:06 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:26.093 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:26.093 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:26.093 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:26.093 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.093 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.352 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:26.352 "name": "raid_bdev1", 00:25:26.352 "uuid": "9c1ec310-b087-4fee-bc6c-5e1f387dbe93", 00:25:26.352 "strip_size_kb": 0, 00:25:26.352 "state": "online", 00:25:26.352 "raid_level": "raid1", 00:25:26.352 "superblock": false, 00:25:26.352 "num_base_bdevs": 4, 00:25:26.352 "num_base_bdevs_discovered": 3, 00:25:26.352 "num_base_bdevs_operational": 3, 00:25:26.352 "base_bdevs_list": [ 00:25:26.352 { 00:25:26.352 "name": "spare", 00:25:26.352 "uuid": "b3988100-7ec0-5d6f-a205-fa2652da201c", 00:25:26.352 "is_configured": true, 00:25:26.352 "data_offset": 0, 00:25:26.352 "data_size": 65536 00:25:26.352 }, 00:25:26.352 { 00:25:26.352 "name": null, 00:25:26.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.352 "is_configured": false, 00:25:26.352 "data_offset": 0, 00:25:26.352 "data_size": 65536 00:25:26.352 }, 00:25:26.352 { 00:25:26.352 "name": "BaseBdev3", 00:25:26.352 "uuid": "845c6e75-431c-5ad0-8759-e06cf92a2bba", 00:25:26.352 "is_configured": true, 00:25:26.352 "data_offset": 0, 00:25:26.352 "data_size": 65536 00:25:26.352 }, 00:25:26.352 { 00:25:26.352 "name": "BaseBdev4", 00:25:26.352 "uuid": "00c54caf-a39d-5cb7-8c8f-58756f4ac14e", 00:25:26.352 "is_configured": true, 00:25:26.352 
"data_offset": 0, 00:25:26.352 "data_size": 65536 00:25:26.352 } 00:25:26.352 ] 00:25:26.352 }' 00:25:26.352 13:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:26.352 13:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:26.921 13:25:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:27.180 [2024-07-26 13:25:07.471269] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:27.181 [2024-07-26 13:25:07.471299] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:27.181 00:25:27.181 Latency(us) 00:25:27.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.181 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:25:27.181 raid_bdev1 : 11.80 95.24 285.71 0.00 0.00 14629.94 273.61 119957.09 00:25:27.181 =================================================================================================================== 00:25:27.181 Total : 95.24 285.71 0.00 0.00 14629.94 273.61 119957.09 00:25:27.181 [2024-07-26 13:25:07.502958] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:27.181 [2024-07-26 13:25:07.502986] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:27.181 [2024-07-26 13:25:07.503071] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:27.181 [2024-07-26 13:25:07.503082] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x255c5b0 name raid_bdev1, state offline 00:25:27.181 0 00:25:27.181 13:25:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:25:27.181 13:25:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # jq length 00:25:27.440 13:25:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:25:27.440 13:25:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:25:27.440 13:25:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:25:27.440 13:25:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:25:27.440 13:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:27.440 13:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:25:27.440 13:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:27.440 13:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:27.440 13:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:27.440 13:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:25:27.440 13:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:27.440 13:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:27.440 13:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:25:28.008 /dev/nbd0 00:25:28.008 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:28.008 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:28.008 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:25:28.008 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:25:28.008 
13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:28.009 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:28.009 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:25:28.009 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:25:28.009 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:28.009 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:28.009 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:28.009 1+0 records in 00:25:28.009 1+0 records out 00:25:28.009 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257141 s, 15.9 MB/s 00:25:28.009 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:28.009 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:25:28.009 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:28.009 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:28.009 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:25:28.009 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:28.009 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:28.009 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:25:28.009 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z 
'' ']' 00:25:28.009 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # continue 00:25:28.009 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:25:28.009 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev3 ']' 00:25:28.009 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:25:28.009 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:28.009 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:25:28.009 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:28.009 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:25:28.009 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:28.009 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:25:28.009 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:28.009 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:28.009 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:25:28.009 /dev/nbd1 00:25:28.268 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:28.268 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:28.268 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:25:28.268 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:25:28.268 13:25:08 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:28.268 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:28.268 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:25:28.268 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:25:28.268 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:28.268 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:28.268 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:28.268 1+0 records in 00:25:28.268 1+0 records out 00:25:28.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270333 s, 15.2 MB/s 00:25:28.268 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:28.268 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:25:28.268 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:28.268 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:28.268 13:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:25:28.268 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:28.268 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:28.268 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:28.268 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 
00:25:28.268 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:28.269 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:25:28.269 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:28.269 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:25:28.269 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:28.269 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:28.528 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:28.528 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:28.528 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:28.528 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:28.528 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:28.528 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:28.528 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:25:28.528 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:28.528 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:25:28.528 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev4 ']' 00:25:28.528 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:25:28.528 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:25:28.528 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:25:28.528 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:28.528 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:25:28.528 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:28.528 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:25:28.528 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:28.528 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:28.528 13:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:25:28.787 /dev/nbd1 00:25:28.787 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:28.787 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:28.787 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:25:28.787 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:25:28.787 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:28.787 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:28.787 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:25:28.787 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:25:28.787 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:28.787 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 
20 )) 00:25:28.787 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:28.787 1+0 records in 00:25:28.787 1+0 records out 00:25:28.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024129 s, 17.0 MB/s 00:25:28.787 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:28.787 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:25:28.787 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:28.787 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:28.787 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:25:28.787 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:28.787 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:28.787 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:28.787 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:25:28.787 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:28.787 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:25:28.787 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:28.787 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:25:28.787 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:28.787 13:25:09 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:29.047 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:29.047 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:29.047 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:29.047 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:29.047 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:29.047 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:29.047 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:25:29.047 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:29.047 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:29.047 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:29.047 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:29.047 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:29.047 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:25:29.047 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:29.047 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:29.306 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:29.307 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit 
nbd0 00:25:29.307 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:29.307 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:29.307 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:29.307 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:29.307 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:25:29.307 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:29.307 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:25:29.307 13:25:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@798 -- # killprocess 810242 00:25:29.307 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 810242 ']' 00:25:29.307 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 810242 00:25:29.307 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:25:29.307 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:29.307 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 810242 00:25:29.307 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:29.307 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:29.307 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 810242' 00:25:29.307 killing process with pid 810242 00:25:29.307 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 810242 00:25:29.307 Received shutdown signal, test time was about 14.093526 seconds 00:25:29.307 00:25:29.307 Latency(us) 00:25:29.307 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:29.307 =================================================================================================================== 00:25:29.307 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:29.307 [2024-07-26 13:25:09.796030] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:29.307 13:25:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 810242 00:25:29.307 [2024-07-26 13:25:09.831127] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:29.566 13:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@800 -- # return 0 00:25:29.566 00:25:29.566 real 0m19.611s 00:25:29.566 user 0m30.096s 00:25:29.566 sys 0m3.270s 00:25:29.566 13:25:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:29.566 13:25:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:25:29.566 ************************************ 00:25:29.566 END TEST raid_rebuild_test_io 00:25:29.566 ************************************ 00:25:29.566 13:25:10 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:25:29.566 13:25:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:25:29.566 13:25:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:29.566 13:25:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:29.826 ************************************ 00:25:29.826 START TEST raid_rebuild_test_sb_io 00:25:29.826 ************************************ 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:25:29.826 
13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # local 
base_bdevs 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # raid_pid=813848 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 813848 /var/tmp/spdk-raid.sock 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 813848 ']' 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-raid.sock...' 00:25:29.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:29.826 13:25:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:29.826 [2024-07-26 13:25:10.183946] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:25:29.826 [2024-07-26 13:25:10.184003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid813848 ] 00:25:29.826 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:29.826 Zero copy mechanism will not be used. 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 
0000:3d:01.7 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3f:01.5 cannot be 
used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:29.827 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:29.827 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:29.827 [2024-07-26 13:25:10.317995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.086 [2024-07-26 13:25:10.402371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.086 [2024-07-26 13:25:10.456031] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:30.086 [2024-07-26 13:25:10.456062] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:30.655 13:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:30.655 13:25:11 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:25:30.655 13:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:30.655 13:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:30.914 BaseBdev1_malloc 00:25:30.914 13:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:31.173 [2024-07-26 13:25:11.524200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:31.173 [2024-07-26 13:25:11.524248] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:31.173 [2024-07-26 13:25:11.524269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd7c5f0 00:25:31.173 [2024-07-26 13:25:11.524281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:31.173 [2024-07-26 13:25:11.525845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:31.173 [2024-07-26 13:25:11.525873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:31.173 BaseBdev1 00:25:31.173 13:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:31.173 13:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:31.433 BaseBdev2_malloc 00:25:31.433 13:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:31.692 [2024-07-26 13:25:11.985870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:31.692 [2024-07-26 13:25:11.985913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:31.692 [2024-07-26 13:25:11.985930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf20130 00:25:31.692 [2024-07-26 13:25:11.985941] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:31.692 [2024-07-26 13:25:11.987272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:31.692 [2024-07-26 13:25:11.987299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:31.692 BaseBdev2 00:25:31.692 13:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:31.692 13:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:31.951 BaseBdev3_malloc 00:25:31.951 13:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:31.951 [2024-07-26 13:25:12.443346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:31.951 [2024-07-26 13:25:12.443383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:31.951 [2024-07-26 13:25:12.443399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf16420 00:25:31.951 [2024-07-26 13:25:12.443411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:31.951 [2024-07-26 13:25:12.444662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:25:31.952 [2024-07-26 13:25:12.444686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:31.952 BaseBdev3 00:25:31.952 13:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:25:31.952 13:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:32.211 BaseBdev4_malloc 00:25:32.211 13:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:32.470 [2024-07-26 13:25:12.900768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:32.470 [2024-07-26 13:25:12.900813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:32.470 [2024-07-26 13:25:12.900830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf16d40 00:25:32.470 [2024-07-26 13:25:12.900841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:32.470 [2024-07-26 13:25:12.902153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:32.470 [2024-07-26 13:25:12.902181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:32.470 BaseBdev4 00:25:32.470 13:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:32.730 spare_malloc 00:25:32.730 13:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 
0 -t 0 -w 100000 -n 100000 00:25:32.988 spare_delay 00:25:32.989 13:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:33.247 [2024-07-26 13:25:13.586834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:33.247 [2024-07-26 13:25:13.586877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:33.247 [2024-07-26 13:25:13.586895] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd75db0 00:25:33.247 [2024-07-26 13:25:13.586906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:33.247 [2024-07-26 13:25:13.588239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:33.247 [2024-07-26 13:25:13.588266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:33.247 spare 00:25:33.247 13:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:33.506 [2024-07-26 13:25:13.815467] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:33.506 [2024-07-26 13:25:13.816653] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:33.506 [2024-07-26 13:25:13.816708] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:33.506 [2024-07-26 13:25:13.816750] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:33.506 [2024-07-26 13:25:13.816916] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xd785b0 00:25:33.506 [2024-07-26 13:25:13.816927] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 63488, blocklen 512 00:25:33.506 [2024-07-26 13:25:13.817112] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd7b3d0 00:25:33.506 [2024-07-26 13:25:13.817257] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd785b0 00:25:33.506 [2024-07-26 13:25:13.817267] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd785b0 00:25:33.506 [2024-07-26 13:25:13.817365] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:33.506 13:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:25:33.506 13:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:33.506 13:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:33.506 13:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:33.506 13:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:33.506 13:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:33.506 13:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:33.506 13:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:33.506 13:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:33.506 13:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:33.506 13:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.506 13:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:25:33.765 13:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:33.765 "name": "raid_bdev1", 00:25:33.765 "uuid": "400e89d7-65f7-4834-bf2c-149a3c5e12d6", 00:25:33.765 "strip_size_kb": 0, 00:25:33.765 "state": "online", 00:25:33.765 "raid_level": "raid1", 00:25:33.765 "superblock": true, 00:25:33.765 "num_base_bdevs": 4, 00:25:33.765 "num_base_bdevs_discovered": 4, 00:25:33.765 "num_base_bdevs_operational": 4, 00:25:33.765 "base_bdevs_list": [ 00:25:33.765 { 00:25:33.765 "name": "BaseBdev1", 00:25:33.765 "uuid": "8bbc3b5c-d607-5f1b-bb5b-433cc68ef4d2", 00:25:33.765 "is_configured": true, 00:25:33.765 "data_offset": 2048, 00:25:33.765 "data_size": 63488 00:25:33.765 }, 00:25:33.765 { 00:25:33.765 "name": "BaseBdev2", 00:25:33.765 "uuid": "f54f832f-6643-56a7-b9f1-75caebb80379", 00:25:33.765 "is_configured": true, 00:25:33.765 "data_offset": 2048, 00:25:33.765 "data_size": 63488 00:25:33.765 }, 00:25:33.765 { 00:25:33.765 "name": "BaseBdev3", 00:25:33.765 "uuid": "763b2e9e-65ac-5868-9799-e3d5cbc7c231", 00:25:33.765 "is_configured": true, 00:25:33.765 "data_offset": 2048, 00:25:33.765 "data_size": 63488 00:25:33.765 }, 00:25:33.765 { 00:25:33.765 "name": "BaseBdev4", 00:25:33.765 "uuid": "e0f47178-4338-595d-b7f4-ece1769f240a", 00:25:33.765 "is_configured": true, 00:25:33.765 "data_offset": 2048, 00:25:33.765 "data_size": 63488 00:25:33.765 } 00:25:33.765 ] 00:25:33.765 }' 00:25:33.765 13:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:33.765 13:25:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:34.340 13:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:34.340 13:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:25:34.340 [2024-07-26 13:25:14.854551] 
bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:34.642 13:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:25:34.643 13:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.643 13:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:34.643 13:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:25:34.643 13:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:25:34.643 13:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:34.643 13:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@638 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:25:34.901 [2024-07-26 13:25:15.201192] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd7b930 00:25:34.901 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:34.901 Zero copy mechanism will not be used. 00:25:34.901 Running I/O for 60 seconds... 
00:25:34.901 [2024-07-26 13:25:15.310957] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:34.901 [2024-07-26 13:25:15.325880] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0xd7b930 00:25:34.901 13:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:34.901 13:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:34.901 13:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:34.901 13:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:34.901 13:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:34.901 13:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:34.901 13:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:34.901 13:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:34.901 13:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:34.901 13:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:34.901 13:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.901 13:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.160 13:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:35.160 "name": "raid_bdev1", 00:25:35.160 "uuid": "400e89d7-65f7-4834-bf2c-149a3c5e12d6", 00:25:35.160 "strip_size_kb": 0, 00:25:35.160 "state": "online", 00:25:35.160 "raid_level": "raid1", 
00:25:35.160 "superblock": true, 00:25:35.160 "num_base_bdevs": 4, 00:25:35.160 "num_base_bdevs_discovered": 3, 00:25:35.160 "num_base_bdevs_operational": 3, 00:25:35.160 "base_bdevs_list": [ 00:25:35.160 { 00:25:35.160 "name": null, 00:25:35.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.160 "is_configured": false, 00:25:35.160 "data_offset": 2048, 00:25:35.160 "data_size": 63488 00:25:35.160 }, 00:25:35.160 { 00:25:35.160 "name": "BaseBdev2", 00:25:35.160 "uuid": "f54f832f-6643-56a7-b9f1-75caebb80379", 00:25:35.160 "is_configured": true, 00:25:35.160 "data_offset": 2048, 00:25:35.160 "data_size": 63488 00:25:35.160 }, 00:25:35.160 { 00:25:35.160 "name": "BaseBdev3", 00:25:35.160 "uuid": "763b2e9e-65ac-5868-9799-e3d5cbc7c231", 00:25:35.160 "is_configured": true, 00:25:35.160 "data_offset": 2048, 00:25:35.160 "data_size": 63488 00:25:35.160 }, 00:25:35.160 { 00:25:35.160 "name": "BaseBdev4", 00:25:35.160 "uuid": "e0f47178-4338-595d-b7f4-ece1769f240a", 00:25:35.160 "is_configured": true, 00:25:35.160 "data_offset": 2048, 00:25:35.160 "data_size": 63488 00:25:35.160 } 00:25:35.160 ] 00:25:35.160 }' 00:25:35.160 13:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:35.160 13:25:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:35.729 13:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:35.989 [2024-07-26 13:25:16.386344] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:35.989 13:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:35.989 [2024-07-26 13:25:16.445181] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe13950 00:25:35.989 [2024-07-26 13:25:16.447406] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on 
raid bdev raid_bdev1 00:25:36.248 [2024-07-26 13:25:16.557514] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:36.248 [2024-07-26 13:25:16.557874] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:36.248 [2024-07-26 13:25:16.686512] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:36.248 [2024-07-26 13:25:16.686671] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:36.817 [2024-07-26 13:25:17.165021] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:36.817 [2024-07-26 13:25:17.165192] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:37.076 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:37.076 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:37.076 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:37.076 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:37.076 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:37.076 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.076 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.076 [2024-07-26 13:25:17.516672] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:37.076 [2024-07-26 13:25:17.517683] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:37.336 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:37.336 "name": "raid_bdev1", 00:25:37.336 "uuid": "400e89d7-65f7-4834-bf2c-149a3c5e12d6", 00:25:37.336 "strip_size_kb": 0, 00:25:37.336 "state": "online", 00:25:37.336 "raid_level": "raid1", 00:25:37.336 "superblock": true, 00:25:37.336 "num_base_bdevs": 4, 00:25:37.336 "num_base_bdevs_discovered": 4, 00:25:37.336 "num_base_bdevs_operational": 4, 00:25:37.336 "process": { 00:25:37.336 "type": "rebuild", 00:25:37.336 "target": "spare", 00:25:37.336 "progress": { 00:25:37.336 "blocks": 14336, 00:25:37.336 "percent": 22 00:25:37.336 } 00:25:37.336 }, 00:25:37.336 "base_bdevs_list": [ 00:25:37.336 { 00:25:37.336 "name": "spare", 00:25:37.336 "uuid": "c78e3662-0e16-5139-a6a8-6d37a64cc128", 00:25:37.336 "is_configured": true, 00:25:37.336 "data_offset": 2048, 00:25:37.336 "data_size": 63488 00:25:37.336 }, 00:25:37.336 { 00:25:37.336 "name": "BaseBdev2", 00:25:37.336 "uuid": "f54f832f-6643-56a7-b9f1-75caebb80379", 00:25:37.336 "is_configured": true, 00:25:37.336 "data_offset": 2048, 00:25:37.336 "data_size": 63488 00:25:37.336 }, 00:25:37.336 { 00:25:37.336 "name": "BaseBdev3", 00:25:37.336 "uuid": "763b2e9e-65ac-5868-9799-e3d5cbc7c231", 00:25:37.336 "is_configured": true, 00:25:37.336 "data_offset": 2048, 00:25:37.336 "data_size": 63488 00:25:37.336 }, 00:25:37.336 { 00:25:37.336 "name": "BaseBdev4", 00:25:37.336 "uuid": "e0f47178-4338-595d-b7f4-ece1769f240a", 00:25:37.336 "is_configured": true, 00:25:37.336 "data_offset": 2048, 00:25:37.336 "data_size": 63488 00:25:37.336 } 00:25:37.336 ] 00:25:37.336 }' 00:25:37.336 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:37.336 13:25:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:37.336 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:37.336 [2024-07-26 13:25:17.738228] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:37.336 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:37.336 13:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:37.596 [2024-07-26 13:25:17.976054] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:37.596 [2024-07-26 13:25:18.076654] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:37.596 [2024-07-26 13:25:18.077235] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:37.855 [2024-07-26 13:25:18.187684] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:37.855 [2024-07-26 13:25:18.198949] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:37.855 [2024-07-26 13:25:18.198975] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:37.855 [2024-07-26 13:25:18.198985] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:37.855 [2024-07-26 13:25:18.228532] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0xd7b930 00:25:37.855 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:37.855 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:37.855 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:37.855 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:37.855 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:37.855 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:37.855 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:37.855 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:37.855 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:37.855 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:37.855 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.855 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:38.114 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:38.114 "name": "raid_bdev1", 00:25:38.114 "uuid": "400e89d7-65f7-4834-bf2c-149a3c5e12d6", 00:25:38.114 "strip_size_kb": 0, 00:25:38.114 "state": "online", 00:25:38.114 "raid_level": "raid1", 00:25:38.114 "superblock": true, 00:25:38.114 "num_base_bdevs": 4, 00:25:38.114 "num_base_bdevs_discovered": 3, 00:25:38.114 "num_base_bdevs_operational": 3, 00:25:38.114 "base_bdevs_list": [ 00:25:38.114 { 00:25:38.114 "name": null, 00:25:38.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.114 "is_configured": false, 00:25:38.114 "data_offset": 2048, 00:25:38.114 "data_size": 63488 00:25:38.114 }, 00:25:38.114 { 
00:25:38.114 "name": "BaseBdev2", 00:25:38.114 "uuid": "f54f832f-6643-56a7-b9f1-75caebb80379", 00:25:38.114 "is_configured": true, 00:25:38.114 "data_offset": 2048, 00:25:38.114 "data_size": 63488 00:25:38.114 }, 00:25:38.114 { 00:25:38.114 "name": "BaseBdev3", 00:25:38.114 "uuid": "763b2e9e-65ac-5868-9799-e3d5cbc7c231", 00:25:38.114 "is_configured": true, 00:25:38.114 "data_offset": 2048, 00:25:38.114 "data_size": 63488 00:25:38.114 }, 00:25:38.114 { 00:25:38.114 "name": "BaseBdev4", 00:25:38.114 "uuid": "e0f47178-4338-595d-b7f4-ece1769f240a", 00:25:38.114 "is_configured": true, 00:25:38.114 "data_offset": 2048, 00:25:38.114 "data_size": 63488 00:25:38.114 } 00:25:38.114 ] 00:25:38.114 }' 00:25:38.114 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:38.114 13:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:38.681 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:38.681 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:38.681 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:38.681 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:38.681 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:38.681 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.681 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:38.940 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:38.940 "name": "raid_bdev1", 00:25:38.940 "uuid": 
"400e89d7-65f7-4834-bf2c-149a3c5e12d6", 00:25:38.940 "strip_size_kb": 0, 00:25:38.940 "state": "online", 00:25:38.940 "raid_level": "raid1", 00:25:38.940 "superblock": true, 00:25:38.940 "num_base_bdevs": 4, 00:25:38.940 "num_base_bdevs_discovered": 3, 00:25:38.940 "num_base_bdevs_operational": 3, 00:25:38.940 "base_bdevs_list": [ 00:25:38.940 { 00:25:38.940 "name": null, 00:25:38.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.940 "is_configured": false, 00:25:38.940 "data_offset": 2048, 00:25:38.940 "data_size": 63488 00:25:38.940 }, 00:25:38.940 { 00:25:38.940 "name": "BaseBdev2", 00:25:38.940 "uuid": "f54f832f-6643-56a7-b9f1-75caebb80379", 00:25:38.940 "is_configured": true, 00:25:38.940 "data_offset": 2048, 00:25:38.940 "data_size": 63488 00:25:38.940 }, 00:25:38.940 { 00:25:38.940 "name": "BaseBdev3", 00:25:38.940 "uuid": "763b2e9e-65ac-5868-9799-e3d5cbc7c231", 00:25:38.940 "is_configured": true, 00:25:38.940 "data_offset": 2048, 00:25:38.940 "data_size": 63488 00:25:38.940 }, 00:25:38.940 { 00:25:38.940 "name": "BaseBdev4", 00:25:38.940 "uuid": "e0f47178-4338-595d-b7f4-ece1769f240a", 00:25:38.940 "is_configured": true, 00:25:38.940 "data_offset": 2048, 00:25:38.940 "data_size": 63488 00:25:38.940 } 00:25:38.940 ] 00:25:38.940 }' 00:25:38.940 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:38.940 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:38.940 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:38.940 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:38.940 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:39.199 [2024-07-26 13:25:19.678514] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:39.458 13:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:25:39.458 [2024-07-26 13:25:19.736795] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd7bc00 00:25:39.458 [2024-07-26 13:25:19.738208] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:39.458 [2024-07-26 13:25:19.849005] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:39.459 [2024-07-26 13:25:19.850113] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:25:39.718 [2024-07-26 13:25:20.059547] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:39.718 [2024-07-26 13:25:20.059731] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:25:40.286 [2024-07-26 13:25:20.557070] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:25:40.286 13:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:40.286 13:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:40.286 13:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:40.286 13:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:40.286 13:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:40.286 13:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:25:40.286 13:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:40.545 [2024-07-26 13:25:20.888778] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:40.545 [2024-07-26 13:25:20.889112] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:25:40.545 13:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:40.545 "name": "raid_bdev1", 00:25:40.545 "uuid": "400e89d7-65f7-4834-bf2c-149a3c5e12d6", 00:25:40.545 "strip_size_kb": 0, 00:25:40.545 "state": "online", 00:25:40.545 "raid_level": "raid1", 00:25:40.545 "superblock": true, 00:25:40.545 "num_base_bdevs": 4, 00:25:40.545 "num_base_bdevs_discovered": 4, 00:25:40.545 "num_base_bdevs_operational": 4, 00:25:40.545 "process": { 00:25:40.545 "type": "rebuild", 00:25:40.545 "target": "spare", 00:25:40.545 "progress": { 00:25:40.545 "blocks": 14336, 00:25:40.545 "percent": 22 00:25:40.545 } 00:25:40.545 }, 00:25:40.545 "base_bdevs_list": [ 00:25:40.545 { 00:25:40.545 "name": "spare", 00:25:40.545 "uuid": "c78e3662-0e16-5139-a6a8-6d37a64cc128", 00:25:40.545 "is_configured": true, 00:25:40.545 "data_offset": 2048, 00:25:40.545 "data_size": 63488 00:25:40.545 }, 00:25:40.545 { 00:25:40.545 "name": "BaseBdev2", 00:25:40.545 "uuid": "f54f832f-6643-56a7-b9f1-75caebb80379", 00:25:40.545 "is_configured": true, 00:25:40.545 "data_offset": 2048, 00:25:40.545 "data_size": 63488 00:25:40.545 }, 00:25:40.545 { 00:25:40.545 "name": "BaseBdev3", 00:25:40.545 "uuid": "763b2e9e-65ac-5868-9799-e3d5cbc7c231", 00:25:40.545 "is_configured": true, 00:25:40.545 "data_offset": 2048, 00:25:40.545 "data_size": 63488 00:25:40.545 }, 00:25:40.545 { 00:25:40.545 "name": "BaseBdev4", 00:25:40.545 "uuid": "e0f47178-4338-595d-b7f4-ece1769f240a", 00:25:40.545 "is_configured": 
true, 00:25:40.545 "data_offset": 2048, 00:25:40.545 "data_size": 63488 00:25:40.545 } 00:25:40.545 ] 00:25:40.545 }' 00:25:40.545 13:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:40.545 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:40.545 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:40.545 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:40.545 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:25:40.545 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:25:40.545 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:25:40.545 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:25:40.545 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:25:40.545 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:25:40.545 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:40.804 [2024-07-26 13:25:21.093179] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:25:40.804 [2024-07-26 13:25:21.270212] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:41.063 [2024-07-26 13:25:21.507956] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0xd7b930 00:25:41.063 [2024-07-26 13:25:21.507984] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0xd7bc00 
00:25:41.063 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:25:41.063 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:25:41.063 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:41.063 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:41.063 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:41.063 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:41.063 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:41.063 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.063 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:41.321 [2024-07-26 13:25:21.745006] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:25:41.321 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:41.321 "name": "raid_bdev1", 00:25:41.321 "uuid": "400e89d7-65f7-4834-bf2c-149a3c5e12d6", 00:25:41.321 "strip_size_kb": 0, 00:25:41.321 "state": "online", 00:25:41.321 "raid_level": "raid1", 00:25:41.321 "superblock": true, 00:25:41.321 "num_base_bdevs": 4, 00:25:41.321 "num_base_bdevs_discovered": 3, 00:25:41.321 "num_base_bdevs_operational": 3, 00:25:41.321 "process": { 00:25:41.321 "type": "rebuild", 00:25:41.321 "target": "spare", 00:25:41.321 "progress": { 00:25:41.321 "blocks": 22528, 00:25:41.321 "percent": 35 00:25:41.321 } 00:25:41.321 }, 00:25:41.321 "base_bdevs_list": [ 
00:25:41.321 { 00:25:41.321 "name": "spare", 00:25:41.321 "uuid": "c78e3662-0e16-5139-a6a8-6d37a64cc128", 00:25:41.321 "is_configured": true, 00:25:41.321 "data_offset": 2048, 00:25:41.321 "data_size": 63488 00:25:41.321 }, 00:25:41.321 { 00:25:41.321 "name": null, 00:25:41.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:41.321 "is_configured": false, 00:25:41.321 "data_offset": 2048, 00:25:41.321 "data_size": 63488 00:25:41.321 }, 00:25:41.321 { 00:25:41.321 "name": "BaseBdev3", 00:25:41.321 "uuid": "763b2e9e-65ac-5868-9799-e3d5cbc7c231", 00:25:41.321 "is_configured": true, 00:25:41.321 "data_offset": 2048, 00:25:41.321 "data_size": 63488 00:25:41.321 }, 00:25:41.321 { 00:25:41.321 "name": "BaseBdev4", 00:25:41.321 "uuid": "e0f47178-4338-595d-b7f4-ece1769f240a", 00:25:41.321 "is_configured": true, 00:25:41.321 "data_offset": 2048, 00:25:41.321 "data_size": 63488 00:25:41.321 } 00:25:41.321 ] 00:25:41.321 }' 00:25:41.321 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:41.321 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:41.321 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:41.580 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:41.580 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # local timeout=907 00:25:41.580 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:41.580 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:41.580 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:41.580 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:41.580 
13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:41.580 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:41.580 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.580 13:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:41.580 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:41.580 "name": "raid_bdev1", 00:25:41.580 "uuid": "400e89d7-65f7-4834-bf2c-149a3c5e12d6", 00:25:41.580 "strip_size_kb": 0, 00:25:41.580 "state": "online", 00:25:41.580 "raid_level": "raid1", 00:25:41.580 "superblock": true, 00:25:41.580 "num_base_bdevs": 4, 00:25:41.580 "num_base_bdevs_discovered": 3, 00:25:41.580 "num_base_bdevs_operational": 3, 00:25:41.580 "process": { 00:25:41.580 "type": "rebuild", 00:25:41.580 "target": "spare", 00:25:41.580 "progress": { 00:25:41.580 "blocks": 24576, 00:25:41.580 "percent": 38 00:25:41.580 } 00:25:41.580 }, 00:25:41.580 "base_bdevs_list": [ 00:25:41.580 { 00:25:41.580 "name": "spare", 00:25:41.580 "uuid": "c78e3662-0e16-5139-a6a8-6d37a64cc128", 00:25:41.580 "is_configured": true, 00:25:41.580 "data_offset": 2048, 00:25:41.580 "data_size": 63488 00:25:41.580 }, 00:25:41.580 { 00:25:41.580 "name": null, 00:25:41.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:41.580 "is_configured": false, 00:25:41.580 "data_offset": 2048, 00:25:41.580 "data_size": 63488 00:25:41.580 }, 00:25:41.580 { 00:25:41.580 "name": "BaseBdev3", 00:25:41.580 "uuid": "763b2e9e-65ac-5868-9799-e3d5cbc7c231", 00:25:41.580 "is_configured": true, 00:25:41.580 "data_offset": 2048, 00:25:41.580 "data_size": 63488 00:25:41.580 }, 00:25:41.580 { 00:25:41.580 "name": "BaseBdev4", 00:25:41.580 "uuid": 
"e0f47178-4338-595d-b7f4-ece1769f240a", 00:25:41.580 "is_configured": true, 00:25:41.580 "data_offset": 2048, 00:25:41.580 "data_size": 63488 00:25:41.580 } 00:25:41.580 ] 00:25:41.580 }' 00:25:41.580 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:41.580 [2024-07-26 13:25:22.098610] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:25:41.838 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:41.838 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:41.838 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:41.838 13:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:42.096 [2024-07-26 13:25:22.563063] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:25:42.354 [2024-07-26 13:25:22.680745] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:25:42.612 [2024-07-26 13:25:23.010405] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:25:42.879 13:25:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:42.880 13:25:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:42.880 13:25:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:42.880 13:25:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:42.880 13:25:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 
00:25:42.880 13:25:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:42.880 13:25:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.880 13:25:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:42.880 [2024-07-26 13:25:23.221815] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:25:43.145 13:25:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:43.145 "name": "raid_bdev1", 00:25:43.145 "uuid": "400e89d7-65f7-4834-bf2c-149a3c5e12d6", 00:25:43.145 "strip_size_kb": 0, 00:25:43.145 "state": "online", 00:25:43.145 "raid_level": "raid1", 00:25:43.145 "superblock": true, 00:25:43.145 "num_base_bdevs": 4, 00:25:43.145 "num_base_bdevs_discovered": 3, 00:25:43.146 "num_base_bdevs_operational": 3, 00:25:43.146 "process": { 00:25:43.146 "type": "rebuild", 00:25:43.146 "target": "spare", 00:25:43.146 "progress": { 00:25:43.146 "blocks": 47104, 00:25:43.146 "percent": 74 00:25:43.146 } 00:25:43.146 }, 00:25:43.146 "base_bdevs_list": [ 00:25:43.146 { 00:25:43.146 "name": "spare", 00:25:43.146 "uuid": "c78e3662-0e16-5139-a6a8-6d37a64cc128", 00:25:43.146 "is_configured": true, 00:25:43.146 "data_offset": 2048, 00:25:43.146 "data_size": 63488 00:25:43.146 }, 00:25:43.146 { 00:25:43.146 "name": null, 00:25:43.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:43.146 "is_configured": false, 00:25:43.146 "data_offset": 2048, 00:25:43.146 "data_size": 63488 00:25:43.146 }, 00:25:43.146 { 00:25:43.146 "name": "BaseBdev3", 00:25:43.146 "uuid": "763b2e9e-65ac-5868-9799-e3d5cbc7c231", 00:25:43.146 "is_configured": true, 00:25:43.146 "data_offset": 2048, 00:25:43.146 "data_size": 63488 00:25:43.146 }, 00:25:43.146 { 
00:25:43.146 "name": "BaseBdev4", 00:25:43.146 "uuid": "e0f47178-4338-595d-b7f4-ece1769f240a", 00:25:43.146 "is_configured": true, 00:25:43.146 "data_offset": 2048, 00:25:43.146 "data_size": 63488 00:25:43.146 } 00:25:43.146 ] 00:25:43.146 }' 00:25:43.146 13:25:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:43.146 13:25:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:43.146 13:25:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:43.146 13:25:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:43.146 13:25:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:25:43.734 [2024-07-26 13:25:24.229442] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:43.992 [2024-07-26 13:25:24.337052] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:43.992 [2024-07-26 13:25:24.339058] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:43.992 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:25:43.992 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:43.992 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:43.992 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:43.992 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:43.992 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:43.992 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.992 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.250 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:44.250 "name": "raid_bdev1", 00:25:44.250 "uuid": "400e89d7-65f7-4834-bf2c-149a3c5e12d6", 00:25:44.251 "strip_size_kb": 0, 00:25:44.251 "state": "online", 00:25:44.251 "raid_level": "raid1", 00:25:44.251 "superblock": true, 00:25:44.251 "num_base_bdevs": 4, 00:25:44.251 "num_base_bdevs_discovered": 3, 00:25:44.251 "num_base_bdevs_operational": 3, 00:25:44.251 "base_bdevs_list": [ 00:25:44.251 { 00:25:44.251 "name": "spare", 00:25:44.251 "uuid": "c78e3662-0e16-5139-a6a8-6d37a64cc128", 00:25:44.251 "is_configured": true, 00:25:44.251 "data_offset": 2048, 00:25:44.251 "data_size": 63488 00:25:44.251 }, 00:25:44.251 { 00:25:44.251 "name": null, 00:25:44.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.251 "is_configured": false, 00:25:44.251 "data_offset": 2048, 00:25:44.251 "data_size": 63488 00:25:44.251 }, 00:25:44.251 { 00:25:44.251 "name": "BaseBdev3", 00:25:44.251 "uuid": "763b2e9e-65ac-5868-9799-e3d5cbc7c231", 00:25:44.251 "is_configured": true, 00:25:44.251 "data_offset": 2048, 00:25:44.251 "data_size": 63488 00:25:44.251 }, 00:25:44.251 { 00:25:44.251 "name": "BaseBdev4", 00:25:44.251 "uuid": "e0f47178-4338-595d-b7f4-ece1769f240a", 00:25:44.251 "is_configured": true, 00:25:44.251 "data_offset": 2048, 00:25:44.251 "data_size": 63488 00:25:44.251 } 00:25:44.251 ] 00:25:44.251 }' 00:25:44.251 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:44.509 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:44.509 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.target // "none"' 00:25:44.509 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:25:44.509 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # break 00:25:44.509 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:44.509 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:44.509 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:44.509 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:44.509 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:44.509 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.509 13:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.767 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:44.767 "name": "raid_bdev1", 00:25:44.767 "uuid": "400e89d7-65f7-4834-bf2c-149a3c5e12d6", 00:25:44.767 "strip_size_kb": 0, 00:25:44.767 "state": "online", 00:25:44.767 "raid_level": "raid1", 00:25:44.767 "superblock": true, 00:25:44.767 "num_base_bdevs": 4, 00:25:44.767 "num_base_bdevs_discovered": 3, 00:25:44.767 "num_base_bdevs_operational": 3, 00:25:44.767 "base_bdevs_list": [ 00:25:44.767 { 00:25:44.767 "name": "spare", 00:25:44.767 "uuid": "c78e3662-0e16-5139-a6a8-6d37a64cc128", 00:25:44.767 "is_configured": true, 00:25:44.767 "data_offset": 2048, 00:25:44.767 "data_size": 63488 00:25:44.767 }, 00:25:44.767 { 00:25:44.767 "name": null, 00:25:44.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:44.767 "is_configured": false, 
00:25:44.767 "data_offset": 2048, 00:25:44.767 "data_size": 63488 00:25:44.767 }, 00:25:44.767 { 00:25:44.767 "name": "BaseBdev3", 00:25:44.767 "uuid": "763b2e9e-65ac-5868-9799-e3d5cbc7c231", 00:25:44.767 "is_configured": true, 00:25:44.767 "data_offset": 2048, 00:25:44.767 "data_size": 63488 00:25:44.767 }, 00:25:44.767 { 00:25:44.767 "name": "BaseBdev4", 00:25:44.767 "uuid": "e0f47178-4338-595d-b7f4-ece1769f240a", 00:25:44.767 "is_configured": true, 00:25:44.767 "data_offset": 2048, 00:25:44.767 "data_size": 63488 00:25:44.767 } 00:25:44.767 ] 00:25:44.767 }' 00:25:44.767 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:44.767 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:44.767 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:44.767 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:44.767 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:44.767 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:44.767 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:44.767 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:44.767 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:44.767 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:44.767 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:44.767 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:44.767 13:25:25 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:44.767 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:44.767 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.767 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.025 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:45.025 "name": "raid_bdev1", 00:25:45.025 "uuid": "400e89d7-65f7-4834-bf2c-149a3c5e12d6", 00:25:45.025 "strip_size_kb": 0, 00:25:45.025 "state": "online", 00:25:45.025 "raid_level": "raid1", 00:25:45.025 "superblock": true, 00:25:45.025 "num_base_bdevs": 4, 00:25:45.025 "num_base_bdevs_discovered": 3, 00:25:45.025 "num_base_bdevs_operational": 3, 00:25:45.025 "base_bdevs_list": [ 00:25:45.025 { 00:25:45.025 "name": "spare", 00:25:45.025 "uuid": "c78e3662-0e16-5139-a6a8-6d37a64cc128", 00:25:45.025 "is_configured": true, 00:25:45.025 "data_offset": 2048, 00:25:45.025 "data_size": 63488 00:25:45.025 }, 00:25:45.025 { 00:25:45.025 "name": null, 00:25:45.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:45.025 "is_configured": false, 00:25:45.025 "data_offset": 2048, 00:25:45.025 "data_size": 63488 00:25:45.025 }, 00:25:45.025 { 00:25:45.025 "name": "BaseBdev3", 00:25:45.025 "uuid": "763b2e9e-65ac-5868-9799-e3d5cbc7c231", 00:25:45.025 "is_configured": true, 00:25:45.025 "data_offset": 2048, 00:25:45.025 "data_size": 63488 00:25:45.025 }, 00:25:45.025 { 00:25:45.025 "name": "BaseBdev4", 00:25:45.025 "uuid": "e0f47178-4338-595d-b7f4-ece1769f240a", 00:25:45.025 "is_configured": true, 00:25:45.025 "data_offset": 2048, 00:25:45.025 "data_size": 63488 00:25:45.025 } 00:25:45.025 ] 00:25:45.025 }' 00:25:45.025 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:25:45.025 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:45.592 13:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:45.851 [2024-07-26 13:25:26.173315] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:45.851 [2024-07-26 13:25:26.173346] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:45.851 00:25:45.851 Latency(us) 00:25:45.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:45.851 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:25:45.851 raid_bdev1 : 10.98 99.31 297.94 0.00 0.00 13464.66 278.53 110729.63 00:25:45.851 =================================================================================================================== 00:25:45.851 Total : 99.31 297.94 0.00 0.00 13464.66 278.53 110729.63 00:25:45.851 [2024-07-26 13:25:26.209050] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:45.851 [2024-07-26 13:25:26.209077] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:45.851 [2024-07-26 13:25:26.209171] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:45.851 [2024-07-26 13:25:26.209183] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd785b0 name raid_bdev1, state offline 00:25:45.851 0 00:25:45.851 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # jq length 00:25:45.851 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.109 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 
-- # [[ 0 == 0 ]] 00:25:46.109 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:25:46.109 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:25:46.109 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:25:46.109 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:46.109 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:25:46.109 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:46.109 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:46.109 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:46.109 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:25:46.109 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:46.109 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:46.109 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:25:46.368 /dev/nbd0 00:25:46.368 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:46.368 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:46.368 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:25:46.368 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:25:46.368 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:46.368 13:25:26 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:46.368 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:25:46.368 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:25:46.368 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:46.368 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:46.368 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:46.368 1+0 records in 00:25:46.368 1+0 records out 00:25:46.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259414 s, 15.8 MB/s 00:25:46.368 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:46.368 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:25:46.368 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:46.368 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:46.368 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:25:46.368 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:46.368 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:46.368 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:25:46.368 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z '' ']' 00:25:46.368 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@743 -- # continue 00:25:46.369 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:25:46.369 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev3 ']' 00:25:46.369 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:25:46.369 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:46.369 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:25:46.369 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:46.369 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:25:46.369 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:46.369 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:25:46.369 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:46.369 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:46.369 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:25:46.628 /dev/nbd1 00:25:46.628 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:46.628 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:46.628 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:25:46.628 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:25:46.628 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( 
i = 1 )) 00:25:46.628 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:46.628 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:25:46.628 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:25:46.628 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:46.628 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:46.628 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:46.628 1+0 records in 00:25:46.628 1+0 records out 00:25:46.628 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027551 s, 14.9 MB/s 00:25:46.628 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:46.628 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:25:46.628 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:46.628 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:46.628 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:25:46.628 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:46.628 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:46.628 13:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:46.628 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock 
/dev/nbd1 00:25:46.628 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:46.628 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:25:46.628 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:46.628 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:25:46.628 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:46.628 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:46.887 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:46.887 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:46.887 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:46.887 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:46.887 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:46.887 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:46.887 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:25:46.887 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:46.887 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:25:46.887 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev4 ']' 00:25:46.887 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:25:46.887 13:25:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:46.887 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:25:46.887 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:46.887 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:25:46.887 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:46.887 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:25:46.887 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:46.887 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:46.887 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:25:47.146 /dev/nbd1 00:25:47.146 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:47.146 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:47.146 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:25:47.146 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:25:47.146 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:25:47.146 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:25:47.146 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:25:47.146 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:25:47.146 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:25:47.146 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:25:47.146 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:47.146 1+0 records in 00:25:47.146 1+0 records out 00:25:47.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209843 s, 19.5 MB/s 00:25:47.146 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:47.146 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:25:47.146 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:25:47.146 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:25:47.146 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:25:47.146 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:47.146 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:47.146 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:47.146 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:25:47.146 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:47.146 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:25:47.146 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:47.146 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@51 -- # local i 00:25:47.146 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:47.146 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:47.450 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:47.450 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:47.450 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:47.450 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:47.450 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:47.450 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:47.450 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:25:47.450 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:47.450 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:47.450 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:47.450 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:47.450 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:47.450 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:25:47.450 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:47.450 13:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:47.729 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:47.729 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:47.729 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:47.729 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:47.729 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:47.729 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:47.729 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:25:47.729 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:25:47.729 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:25:47.729 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:47.988 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:48.248 [2024-07-26 13:25:28.576680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:48.248 [2024-07-26 13:25:28.576722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:48.248 [2024-07-26 13:25:28.576739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd79dd0 00:25:48.248 [2024-07-26 13:25:28.576751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:48.248 [2024-07-26 13:25:28.578270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:25:48.248 [2024-07-26 13:25:28.578297] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:48.248 [2024-07-26 13:25:28.578370] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:48.248 [2024-07-26 13:25:28.578396] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:48.248 [2024-07-26 13:25:28.578493] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:48.248 [2024-07-26 13:25:28.578561] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:48.248 spare 00:25:48.248 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:48.248 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:48.248 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:48.248 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:48.248 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:48.248 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:48.248 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:48.248 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:48.248 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:48.248 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:48.248 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.248 13:25:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:48.248 [2024-07-26 13:25:28.678871] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xd770f0 00:25:48.248 [2024-07-26 13:25:28.678886] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:48.248 [2024-07-26 13:25:28.679065] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd7a910 00:25:48.248 [2024-07-26 13:25:28.679211] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd770f0 00:25:48.248 [2024-07-26 13:25:28.679221] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd770f0 00:25:48.248 [2024-07-26 13:25:28.679321] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:48.508 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:48.508 "name": "raid_bdev1", 00:25:48.508 "uuid": "400e89d7-65f7-4834-bf2c-149a3c5e12d6", 00:25:48.508 "strip_size_kb": 0, 00:25:48.508 "state": "online", 00:25:48.508 "raid_level": "raid1", 00:25:48.508 "superblock": true, 00:25:48.508 "num_base_bdevs": 4, 00:25:48.508 "num_base_bdevs_discovered": 3, 00:25:48.508 "num_base_bdevs_operational": 3, 00:25:48.508 "base_bdevs_list": [ 00:25:48.508 { 00:25:48.508 "name": "spare", 00:25:48.508 "uuid": "c78e3662-0e16-5139-a6a8-6d37a64cc128", 00:25:48.508 "is_configured": true, 00:25:48.508 "data_offset": 2048, 00:25:48.508 "data_size": 63488 00:25:48.508 }, 00:25:48.508 { 00:25:48.508 "name": null, 00:25:48.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:48.508 "is_configured": false, 00:25:48.508 "data_offset": 2048, 00:25:48.508 "data_size": 63488 00:25:48.508 }, 00:25:48.508 { 00:25:48.508 "name": "BaseBdev3", 00:25:48.508 "uuid": "763b2e9e-65ac-5868-9799-e3d5cbc7c231", 00:25:48.508 "is_configured": true, 00:25:48.508 "data_offset": 2048, 00:25:48.508 
"data_size": 63488 00:25:48.508 }, 00:25:48.508 { 00:25:48.508 "name": "BaseBdev4", 00:25:48.508 "uuid": "e0f47178-4338-595d-b7f4-ece1769f240a", 00:25:48.508 "is_configured": true, 00:25:48.508 "data_offset": 2048, 00:25:48.508 "data_size": 63488 00:25:48.508 } 00:25:48.508 ] 00:25:48.508 }' 00:25:48.508 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:48.508 13:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:49.077 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:49.077 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:49.077 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:49.077 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:49.077 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:49.077 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:49.077 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:49.336 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:49.336 "name": "raid_bdev1", 00:25:49.336 "uuid": "400e89d7-65f7-4834-bf2c-149a3c5e12d6", 00:25:49.336 "strip_size_kb": 0, 00:25:49.336 "state": "online", 00:25:49.336 "raid_level": "raid1", 00:25:49.336 "superblock": true, 00:25:49.336 "num_base_bdevs": 4, 00:25:49.336 "num_base_bdevs_discovered": 3, 00:25:49.336 "num_base_bdevs_operational": 3, 00:25:49.336 "base_bdevs_list": [ 00:25:49.336 { 00:25:49.336 "name": "spare", 00:25:49.337 "uuid": "c78e3662-0e16-5139-a6a8-6d37a64cc128", 
00:25:49.337 "is_configured": true, 00:25:49.337 "data_offset": 2048, 00:25:49.337 "data_size": 63488 00:25:49.337 }, 00:25:49.337 { 00:25:49.337 "name": null, 00:25:49.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.337 "is_configured": false, 00:25:49.337 "data_offset": 2048, 00:25:49.337 "data_size": 63488 00:25:49.337 }, 00:25:49.337 { 00:25:49.337 "name": "BaseBdev3", 00:25:49.337 "uuid": "763b2e9e-65ac-5868-9799-e3d5cbc7c231", 00:25:49.337 "is_configured": true, 00:25:49.337 "data_offset": 2048, 00:25:49.337 "data_size": 63488 00:25:49.337 }, 00:25:49.337 { 00:25:49.337 "name": "BaseBdev4", 00:25:49.337 "uuid": "e0f47178-4338-595d-b7f4-ece1769f240a", 00:25:49.337 "is_configured": true, 00:25:49.337 "data_offset": 2048, 00:25:49.337 "data_size": 63488 00:25:49.337 } 00:25:49.337 ] 00:25:49.337 }' 00:25:49.337 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:49.337 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:49.337 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:49.337 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:49.337 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:49.337 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:49.596 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:25:49.596 13:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:49.856 [2024-07-26 13:25:30.161199] 
bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:49.856 13:25:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:49.856 13:25:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:49.856 13:25:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:49.856 13:25:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:49.856 13:25:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:49.856 13:25:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:49.856 13:25:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:49.856 13:25:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:49.856 13:25:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:49.856 13:25:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:49.856 13:25:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:49.856 13:25:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:50.115 13:25:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:50.115 "name": "raid_bdev1", 00:25:50.115 "uuid": "400e89d7-65f7-4834-bf2c-149a3c5e12d6", 00:25:50.115 "strip_size_kb": 0, 00:25:50.115 "state": "online", 00:25:50.115 "raid_level": "raid1", 00:25:50.115 "superblock": true, 00:25:50.115 "num_base_bdevs": 4, 00:25:50.115 "num_base_bdevs_discovered": 2, 00:25:50.115 "num_base_bdevs_operational": 2, 00:25:50.115 
"base_bdevs_list": [ 00:25:50.115 { 00:25:50.115 "name": null, 00:25:50.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.115 "is_configured": false, 00:25:50.115 "data_offset": 2048, 00:25:50.115 "data_size": 63488 00:25:50.115 }, 00:25:50.115 { 00:25:50.115 "name": null, 00:25:50.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.115 "is_configured": false, 00:25:50.115 "data_offset": 2048, 00:25:50.115 "data_size": 63488 00:25:50.115 }, 00:25:50.115 { 00:25:50.115 "name": "BaseBdev3", 00:25:50.115 "uuid": "763b2e9e-65ac-5868-9799-e3d5cbc7c231", 00:25:50.115 "is_configured": true, 00:25:50.115 "data_offset": 2048, 00:25:50.115 "data_size": 63488 00:25:50.115 }, 00:25:50.115 { 00:25:50.115 "name": "BaseBdev4", 00:25:50.115 "uuid": "e0f47178-4338-595d-b7f4-ece1769f240a", 00:25:50.115 "is_configured": true, 00:25:50.115 "data_offset": 2048, 00:25:50.115 "data_size": 63488 00:25:50.115 } 00:25:50.115 ] 00:25:50.115 }' 00:25:50.115 13:25:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:50.115 13:25:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:50.684 13:25:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:50.684 [2024-07-26 13:25:31.196054] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:50.684 [2024-07-26 13:25:31.196219] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:25:50.684 [2024-07-26 13:25:31.196236] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:25:50.684 [2024-07-26 13:25:31.196263] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:50.684 [2024-07-26 13:25:31.200513] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa7ab40 00:25:50.684 [2024-07-26 13:25:31.202606] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:50.943 13:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # sleep 1 00:25:51.881 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:51.881 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:51.881 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:51.881 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:51.881 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:51.881 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:51.881 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.881 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:51.881 "name": "raid_bdev1", 00:25:51.881 "uuid": "400e89d7-65f7-4834-bf2c-149a3c5e12d6", 00:25:51.881 "strip_size_kb": 0, 00:25:51.881 "state": "online", 00:25:51.881 "raid_level": "raid1", 00:25:51.881 "superblock": true, 00:25:51.881 "num_base_bdevs": 4, 00:25:51.881 "num_base_bdevs_discovered": 3, 00:25:51.881 "num_base_bdevs_operational": 3, 00:25:51.881 "process": { 00:25:51.881 "type": "rebuild", 00:25:51.881 "target": "spare", 00:25:51.881 "progress": { 00:25:51.881 "blocks": 22528, 
00:25:51.881 "percent": 35 00:25:51.881 } 00:25:51.881 }, 00:25:51.881 "base_bdevs_list": [ 00:25:51.881 { 00:25:51.881 "name": "spare", 00:25:51.881 "uuid": "c78e3662-0e16-5139-a6a8-6d37a64cc128", 00:25:51.881 "is_configured": true, 00:25:51.881 "data_offset": 2048, 00:25:51.881 "data_size": 63488 00:25:51.881 }, 00:25:51.881 { 00:25:51.882 "name": null, 00:25:51.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.882 "is_configured": false, 00:25:51.882 "data_offset": 2048, 00:25:51.882 "data_size": 63488 00:25:51.882 }, 00:25:51.882 { 00:25:51.882 "name": "BaseBdev3", 00:25:51.882 "uuid": "763b2e9e-65ac-5868-9799-e3d5cbc7c231", 00:25:51.882 "is_configured": true, 00:25:51.882 "data_offset": 2048, 00:25:51.882 "data_size": 63488 00:25:51.882 }, 00:25:51.882 { 00:25:51.882 "name": "BaseBdev4", 00:25:51.882 "uuid": "e0f47178-4338-595d-b7f4-ece1769f240a", 00:25:51.882 "is_configured": true, 00:25:51.882 "data_offset": 2048, 00:25:51.882 "data_size": 63488 00:25:51.882 } 00:25:51.882 ] 00:25:51.882 }' 00:25:51.882 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:52.141 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:52.141 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:52.141 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:52.141 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:52.401 [2024-07-26 13:25:32.674252] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:52.401 [2024-07-26 13:25:32.713716] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:52.401 [2024-07-26 13:25:32.713758] 
bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:52.401 [2024-07-26 13:25:32.713773] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:52.401 [2024-07-26 13:25:32.713781] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:52.401 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:52.401 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:52.401 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:52.401 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:52.401 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:52.401 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:52.401 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:52.401 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:52.401 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:52.401 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:52.401 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.401 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:52.660 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:52.660 "name": "raid_bdev1", 00:25:52.660 "uuid": "400e89d7-65f7-4834-bf2c-149a3c5e12d6", 
00:25:52.660 "strip_size_kb": 0, 00:25:52.660 "state": "online", 00:25:52.660 "raid_level": "raid1", 00:25:52.660 "superblock": true, 00:25:52.660 "num_base_bdevs": 4, 00:25:52.660 "num_base_bdevs_discovered": 2, 00:25:52.660 "num_base_bdevs_operational": 2, 00:25:52.660 "base_bdevs_list": [ 00:25:52.660 { 00:25:52.660 "name": null, 00:25:52.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.660 "is_configured": false, 00:25:52.660 "data_offset": 2048, 00:25:52.660 "data_size": 63488 00:25:52.660 }, 00:25:52.660 { 00:25:52.660 "name": null, 00:25:52.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.660 "is_configured": false, 00:25:52.660 "data_offset": 2048, 00:25:52.660 "data_size": 63488 00:25:52.660 }, 00:25:52.660 { 00:25:52.660 "name": "BaseBdev3", 00:25:52.660 "uuid": "763b2e9e-65ac-5868-9799-e3d5cbc7c231", 00:25:52.660 "is_configured": true, 00:25:52.660 "data_offset": 2048, 00:25:52.660 "data_size": 63488 00:25:52.660 }, 00:25:52.660 { 00:25:52.660 "name": "BaseBdev4", 00:25:52.660 "uuid": "e0f47178-4338-595d-b7f4-ece1769f240a", 00:25:52.660 "is_configured": true, 00:25:52.660 "data_offset": 2048, 00:25:52.660 "data_size": 63488 00:25:52.660 } 00:25:52.660 ] 00:25:52.660 }' 00:25:52.660 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:52.660 13:25:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:53.227 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:53.487 [2024-07-26 13:25:33.756765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:53.487 [2024-07-26 13:25:33.756816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:53.487 [2024-07-26 13:25:33.756836] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0xd77570 00:25:53.487 [2024-07-26 13:25:33.756847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:53.487 [2024-07-26 13:25:33.757216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:53.487 [2024-07-26 13:25:33.757233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:53.487 [2024-07-26 13:25:33.757310] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:25:53.487 [2024-07-26 13:25:33.757321] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:25:53.487 [2024-07-26 13:25:33.757331] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:25:53.487 [2024-07-26 13:25:33.757351] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:53.487 [2024-07-26 13:25:33.761637] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd73770 00:25:53.487 spare 00:25:53.487 [2024-07-26 13:25:33.763018] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:53.487 13:25:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # sleep 1 00:25:54.424 13:25:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:54.424 13:25:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:54.424 13:25:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:25:54.424 13:25:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:25:54.424 13:25:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:54.424 13:25:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.424 13:25:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.684 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:54.684 "name": "raid_bdev1", 00:25:54.684 "uuid": "400e89d7-65f7-4834-bf2c-149a3c5e12d6", 00:25:54.684 "strip_size_kb": 0, 00:25:54.684 "state": "online", 00:25:54.684 "raid_level": "raid1", 00:25:54.684 "superblock": true, 00:25:54.684 "num_base_bdevs": 4, 00:25:54.684 "num_base_bdevs_discovered": 3, 00:25:54.684 "num_base_bdevs_operational": 3, 00:25:54.684 "process": { 00:25:54.684 "type": "rebuild", 00:25:54.684 "target": "spare", 00:25:54.684 "progress": { 00:25:54.684 "blocks": 24576, 00:25:54.684 "percent": 38 00:25:54.684 } 00:25:54.684 }, 00:25:54.684 "base_bdevs_list": [ 00:25:54.684 { 00:25:54.684 "name": "spare", 00:25:54.684 "uuid": "c78e3662-0e16-5139-a6a8-6d37a64cc128", 00:25:54.684 "is_configured": true, 00:25:54.684 "data_offset": 2048, 00:25:54.684 "data_size": 63488 00:25:54.684 }, 00:25:54.684 { 00:25:54.684 "name": null, 00:25:54.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.684 "is_configured": false, 00:25:54.684 "data_offset": 2048, 00:25:54.684 "data_size": 63488 00:25:54.684 }, 00:25:54.684 { 00:25:54.684 "name": "BaseBdev3", 00:25:54.684 "uuid": "763b2e9e-65ac-5868-9799-e3d5cbc7c231", 00:25:54.684 "is_configured": true, 00:25:54.684 "data_offset": 2048, 00:25:54.684 "data_size": 63488 00:25:54.684 }, 00:25:54.684 { 00:25:54.684 "name": "BaseBdev4", 00:25:54.684 "uuid": "e0f47178-4338-595d-b7f4-ece1769f240a", 00:25:54.684 "is_configured": true, 00:25:54.684 "data_offset": 2048, 00:25:54.684 "data_size": 63488 00:25:54.684 } 00:25:54.684 ] 00:25:54.684 }' 00:25:54.684 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
00:25:54.684 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:54.684 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:54.684 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:25:54.684 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:54.943 [2024-07-26 13:25:35.312210] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:54.943 [2024-07-26 13:25:35.374838] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:54.943 [2024-07-26 13:25:35.374882] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:54.943 [2024-07-26 13:25:35.374896] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:54.943 [2024-07-26 13:25:35.374904] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:25:54.943 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:54.943 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:54.943 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:54.943 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:54.943 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:54.943 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:54.943 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:25:54.943 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:54.943 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:54.943 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:54.943 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.943 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:55.203 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:55.203 "name": "raid_bdev1", 00:25:55.203 "uuid": "400e89d7-65f7-4834-bf2c-149a3c5e12d6", 00:25:55.203 "strip_size_kb": 0, 00:25:55.203 "state": "online", 00:25:55.203 "raid_level": "raid1", 00:25:55.203 "superblock": true, 00:25:55.203 "num_base_bdevs": 4, 00:25:55.203 "num_base_bdevs_discovered": 2, 00:25:55.203 "num_base_bdevs_operational": 2, 00:25:55.203 "base_bdevs_list": [ 00:25:55.203 { 00:25:55.203 "name": null, 00:25:55.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.203 "is_configured": false, 00:25:55.203 "data_offset": 2048, 00:25:55.203 "data_size": 63488 00:25:55.203 }, 00:25:55.203 { 00:25:55.203 "name": null, 00:25:55.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.203 "is_configured": false, 00:25:55.203 "data_offset": 2048, 00:25:55.203 "data_size": 63488 00:25:55.203 }, 00:25:55.203 { 00:25:55.203 "name": "BaseBdev3", 00:25:55.203 "uuid": "763b2e9e-65ac-5868-9799-e3d5cbc7c231", 00:25:55.203 "is_configured": true, 00:25:55.203 "data_offset": 2048, 00:25:55.203 "data_size": 63488 00:25:55.203 }, 00:25:55.203 { 00:25:55.203 "name": "BaseBdev4", 00:25:55.203 "uuid": "e0f47178-4338-595d-b7f4-ece1769f240a", 00:25:55.203 "is_configured": true, 00:25:55.203 "data_offset": 2048, 
00:25:55.203 "data_size": 63488 00:25:55.203 } 00:25:55.203 ] 00:25:55.203 }' 00:25:55.203 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:55.203 13:25:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:55.771 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:55.771 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:55.771 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:55.771 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:55.771 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:55.771 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.771 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.031 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:56.031 "name": "raid_bdev1", 00:25:56.031 "uuid": "400e89d7-65f7-4834-bf2c-149a3c5e12d6", 00:25:56.031 "strip_size_kb": 0, 00:25:56.031 "state": "online", 00:25:56.031 "raid_level": "raid1", 00:25:56.031 "superblock": true, 00:25:56.031 "num_base_bdevs": 4, 00:25:56.031 "num_base_bdevs_discovered": 2, 00:25:56.031 "num_base_bdevs_operational": 2, 00:25:56.031 "base_bdevs_list": [ 00:25:56.031 { 00:25:56.031 "name": null, 00:25:56.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.031 "is_configured": false, 00:25:56.031 "data_offset": 2048, 00:25:56.031 "data_size": 63488 00:25:56.031 }, 00:25:56.031 { 00:25:56.031 "name": null, 00:25:56.031 "uuid": "00000000-0000-0000-0000-000000000000", 
00:25:56.031 "is_configured": false, 00:25:56.031 "data_offset": 2048, 00:25:56.031 "data_size": 63488 00:25:56.031 }, 00:25:56.031 { 00:25:56.031 "name": "BaseBdev3", 00:25:56.031 "uuid": "763b2e9e-65ac-5868-9799-e3d5cbc7c231", 00:25:56.031 "is_configured": true, 00:25:56.031 "data_offset": 2048, 00:25:56.031 "data_size": 63488 00:25:56.031 }, 00:25:56.031 { 00:25:56.031 "name": "BaseBdev4", 00:25:56.031 "uuid": "e0f47178-4338-595d-b7f4-ece1769f240a", 00:25:56.031 "is_configured": true, 00:25:56.031 "data_offset": 2048, 00:25:56.031 "data_size": 63488 00:25:56.031 } 00:25:56.031 ] 00:25:56.031 }' 00:25:56.031 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:56.031 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:56.031 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:56.031 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:56.031 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:25:56.290 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:56.550 [2024-07-26 13:25:36.931119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:56.550 [2024-07-26 13:25:36.931175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:56.550 [2024-07-26 13:25:36.931193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf2b2b0 00:25:56.550 [2024-07-26 13:25:36.931205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:56.550 
[2024-07-26 13:25:36.931541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:56.550 [2024-07-26 13:25:36.931556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:56.550 [2024-07-26 13:25:36.931620] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:56.550 [2024-07-26 13:25:36.931631] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:25:56.550 [2024-07-26 13:25:36.931642] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:56.550 BaseBdev1 00:25:56.550 13:25:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@789 -- # sleep 1 00:25:57.488 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:25:57.488 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:57.488 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:57.488 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:57.488 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:57.488 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:25:57.488 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:57.488 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:57.488 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:57.488 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:57.488 13:25:37 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.488 13:25:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:57.747 13:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:57.747 "name": "raid_bdev1", 00:25:57.747 "uuid": "400e89d7-65f7-4834-bf2c-149a3c5e12d6", 00:25:57.747 "strip_size_kb": 0, 00:25:57.747 "state": "online", 00:25:57.747 "raid_level": "raid1", 00:25:57.747 "superblock": true, 00:25:57.747 "num_base_bdevs": 4, 00:25:57.747 "num_base_bdevs_discovered": 2, 00:25:57.747 "num_base_bdevs_operational": 2, 00:25:57.747 "base_bdevs_list": [ 00:25:57.747 { 00:25:57.747 "name": null, 00:25:57.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.747 "is_configured": false, 00:25:57.747 "data_offset": 2048, 00:25:57.747 "data_size": 63488 00:25:57.747 }, 00:25:57.747 { 00:25:57.747 "name": null, 00:25:57.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.747 "is_configured": false, 00:25:57.747 "data_offset": 2048, 00:25:57.747 "data_size": 63488 00:25:57.747 }, 00:25:57.747 { 00:25:57.747 "name": "BaseBdev3", 00:25:57.747 "uuid": "763b2e9e-65ac-5868-9799-e3d5cbc7c231", 00:25:57.747 "is_configured": true, 00:25:57.747 "data_offset": 2048, 00:25:57.747 "data_size": 63488 00:25:57.747 }, 00:25:57.747 { 00:25:57.747 "name": "BaseBdev4", 00:25:57.747 "uuid": "e0f47178-4338-595d-b7f4-ece1769f240a", 00:25:57.747 "is_configured": true, 00:25:57.747 "data_offset": 2048, 00:25:57.747 "data_size": 63488 00:25:57.747 } 00:25:57.747 ] 00:25:57.747 }' 00:25:57.747 13:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:57.747 13:25:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:25:58.314 13:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@791 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:25:58.314 13:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:25:58.314 13:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:25:58.314 13:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:25:58.314 13:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:25:58.314 13:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.314 13:25:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:58.574 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:25:58.574 "name": "raid_bdev1", 00:25:58.574 "uuid": "400e89d7-65f7-4834-bf2c-149a3c5e12d6", 00:25:58.574 "strip_size_kb": 0, 00:25:58.574 "state": "online", 00:25:58.574 "raid_level": "raid1", 00:25:58.574 "superblock": true, 00:25:58.574 "num_base_bdevs": 4, 00:25:58.574 "num_base_bdevs_discovered": 2, 00:25:58.574 "num_base_bdevs_operational": 2, 00:25:58.574 "base_bdevs_list": [ 00:25:58.574 { 00:25:58.574 "name": null, 00:25:58.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:58.574 "is_configured": false, 00:25:58.574 "data_offset": 2048, 00:25:58.574 "data_size": 63488 00:25:58.574 }, 00:25:58.574 { 00:25:58.574 "name": null, 00:25:58.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:58.574 "is_configured": false, 00:25:58.574 "data_offset": 2048, 00:25:58.574 "data_size": 63488 00:25:58.574 }, 00:25:58.574 { 00:25:58.574 "name": "BaseBdev3", 00:25:58.574 "uuid": "763b2e9e-65ac-5868-9799-e3d5cbc7c231", 00:25:58.574 "is_configured": true, 00:25:58.574 "data_offset": 2048, 00:25:58.574 "data_size": 63488 00:25:58.574 }, 00:25:58.574 { 
00:25:58.574 "name": "BaseBdev4", 00:25:58.574 "uuid": "e0f47178-4338-595d-b7f4-ece1769f240a", 00:25:58.574 "is_configured": true, 00:25:58.574 "data_offset": 2048, 00:25:58.574 "data_size": 63488 00:25:58.574 } 00:25:58.574 ] 00:25:58.574 }' 00:25:58.574 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:25:58.574 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:25:58.574 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:25:58.574 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:58.574 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:58.574 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:25:58.574 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:58.574 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:58.574 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:58.574 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:58.833 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:58.833 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:58.833 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:58.833 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:25:58.833 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:25:58.834 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:25:58.834 [2024-07-26 13:25:39.305943] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:58.834 [2024-07-26 13:25:39.306063] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:25:58.834 [2024-07-26 13:25:39.306077] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:25:58.834 request: 00:25:58.834 { 00:25:58.834 "base_bdev": "BaseBdev1", 00:25:58.834 "raid_bdev": "raid_bdev1", 00:25:58.834 "method": "bdev_raid_add_base_bdev", 00:25:58.834 "req_id": 1 00:25:58.834 } 00:25:58.834 Got JSON-RPC error response 00:25:58.834 response: 00:25:58.834 { 00:25:58.834 "code": -22, 00:25:58.834 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:25:58.834 } 00:25:58.834 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:25:58.834 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:58.834 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:58.834 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:58.834 13:25:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@793 -- # sleep 1 00:26:00.212 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:00.212 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:00.212 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:00.212 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:00.212 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:00.212 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:00.212 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:00.212 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:00.212 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:00.212 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:00.212 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.212 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:00.212 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:00.212 "name": "raid_bdev1", 00:26:00.212 "uuid": "400e89d7-65f7-4834-bf2c-149a3c5e12d6", 00:26:00.212 "strip_size_kb": 0, 00:26:00.212 "state": "online", 00:26:00.212 "raid_level": "raid1", 00:26:00.212 "superblock": true, 00:26:00.212 "num_base_bdevs": 4, 00:26:00.212 
"num_base_bdevs_discovered": 2, 00:26:00.212 "num_base_bdevs_operational": 2, 00:26:00.212 "base_bdevs_list": [ 00:26:00.212 { 00:26:00.212 "name": null, 00:26:00.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.212 "is_configured": false, 00:26:00.212 "data_offset": 2048, 00:26:00.212 "data_size": 63488 00:26:00.212 }, 00:26:00.212 { 00:26:00.212 "name": null, 00:26:00.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.212 "is_configured": false, 00:26:00.212 "data_offset": 2048, 00:26:00.212 "data_size": 63488 00:26:00.212 }, 00:26:00.212 { 00:26:00.212 "name": "BaseBdev3", 00:26:00.212 "uuid": "763b2e9e-65ac-5868-9799-e3d5cbc7c231", 00:26:00.212 "is_configured": true, 00:26:00.212 "data_offset": 2048, 00:26:00.212 "data_size": 63488 00:26:00.212 }, 00:26:00.212 { 00:26:00.212 "name": "BaseBdev4", 00:26:00.212 "uuid": "e0f47178-4338-595d-b7f4-ece1769f240a", 00:26:00.212 "is_configured": true, 00:26:00.212 "data_offset": 2048, 00:26:00.212 "data_size": 63488 00:26:00.212 } 00:26:00.212 ] 00:26:00.212 }' 00:26:00.212 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:00.212 13:25:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:00.778 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:00.778 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:00.778 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:00.778 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:00.778 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:00.778 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:26:00.778 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:01.038 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:01.038 "name": "raid_bdev1", 00:26:01.038 "uuid": "400e89d7-65f7-4834-bf2c-149a3c5e12d6", 00:26:01.038 "strip_size_kb": 0, 00:26:01.038 "state": "online", 00:26:01.038 "raid_level": "raid1", 00:26:01.038 "superblock": true, 00:26:01.038 "num_base_bdevs": 4, 00:26:01.038 "num_base_bdevs_discovered": 2, 00:26:01.038 "num_base_bdevs_operational": 2, 00:26:01.038 "base_bdevs_list": [ 00:26:01.038 { 00:26:01.038 "name": null, 00:26:01.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:01.038 "is_configured": false, 00:26:01.038 "data_offset": 2048, 00:26:01.038 "data_size": 63488 00:26:01.038 }, 00:26:01.038 { 00:26:01.038 "name": null, 00:26:01.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:01.038 "is_configured": false, 00:26:01.038 "data_offset": 2048, 00:26:01.038 "data_size": 63488 00:26:01.038 }, 00:26:01.038 { 00:26:01.038 "name": "BaseBdev3", 00:26:01.038 "uuid": "763b2e9e-65ac-5868-9799-e3d5cbc7c231", 00:26:01.038 "is_configured": true, 00:26:01.038 "data_offset": 2048, 00:26:01.038 "data_size": 63488 00:26:01.038 }, 00:26:01.038 { 00:26:01.038 "name": "BaseBdev4", 00:26:01.038 "uuid": "e0f47178-4338-595d-b7f4-ece1769f240a", 00:26:01.038 "is_configured": true, 00:26:01.038 "data_offset": 2048, 00:26:01.038 "data_size": 63488 00:26:01.038 } 00:26:01.038 ] 00:26:01.038 }' 00:26:01.038 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:01.038 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:01.038 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:01.038 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:01.038 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@798 -- # killprocess 813848 00:26:01.038 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 813848 ']' 00:26:01.038 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 813848 00:26:01.038 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:26:01.038 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:01.038 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 813848 00:26:01.038 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:01.038 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:01.038 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 813848' 00:26:01.038 killing process with pid 813848 00:26:01.038 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 813848 00:26:01.038 Received shutdown signal, test time was about 26.253021 seconds 00:26:01.038 00:26:01.038 Latency(us) 00:26:01.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:01.038 =================================================================================================================== 00:26:01.038 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:01.038 [2024-07-26 13:25:41.520363] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:01.038 [2024-07-26 13:25:41.520460] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:01.038 [2024-07-26 13:25:41.520516] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:26:01.038 [2024-07-26 13:25:41.520529] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd770f0 name raid_bdev1, state offline 00:26:01.038 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 813848 00:26:01.038 [2024-07-26 13:25:41.556166] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:01.330 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@800 -- # return 0 00:26:01.330 00:26:01.330 real 0m31.634s 00:26:01.330 user 0m49.472s 00:26:01.330 sys 0m5.088s 00:26:01.330 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:01.330 13:25:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:26:01.330 ************************************ 00:26:01.330 END TEST raid_rebuild_test_sb_io 00:26:01.330 ************************************ 00:26:01.330 13:25:41 bdev_raid -- bdev/bdev_raid.sh@964 -- # '[' n == y ']' 00:26:01.330 13:25:41 bdev_raid -- bdev/bdev_raid.sh@976 -- # base_blocklen=4096 00:26:01.330 13:25:41 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:26:01.330 13:25:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:26:01.330 13:25:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:01.330 13:25:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:01.330 ************************************ 00:26:01.330 START TEST raid_state_function_test_sb_4k 00:26:01.330 ************************************ 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:26:01.330 
13:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@234 -- # strip_size=0 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=819677 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 819677' 00:26:01.330 Process raid pid: 819677 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 819677 /var/tmp/spdk-raid.sock 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 819677 ']' 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:01.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:01.330 13:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:01.590 [2024-07-26 13:25:41.887709] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:26:01.590 [2024-07-26 13:25:41.887767] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:01.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.590 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:01.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.590 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:01.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.590 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:01.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.590 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:01.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.590 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:01.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.590 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:01.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.590 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:01.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.590 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:01.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.590 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:01.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.590 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:01.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.590 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:01.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.590 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:01.590 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.590 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:01.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.590 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:01.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.590 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:01.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.590 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:01.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.590 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:01.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.590 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:01.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.590 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:01.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.590 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:01.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.590 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:01.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.590 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:01.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.590 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:01.590 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.591 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:01.591 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.591 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:01.591 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.591 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:01.591 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.591 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:01.591 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.591 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:01.591 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.591 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:01.591 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.591 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:01.591 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.591 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:01.591 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:01.591 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:01.591 [2024-07-26 13:25:42.019161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.591 [2024-07-26 13:25:42.106490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:01.850 [2024-07-26 13:25:42.165944] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:01.850 [2024-07-26 13:25:42.165977] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:02.418 13:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:02.418 13:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:26:02.418 13:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:02.678 [2024-07-26 13:25:42.999902] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:02.678 [2024-07-26 13:25:42.999943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:26:02.678 [2024-07-26 13:25:42.999954] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:02.678 [2024-07-26 13:25:42.999968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:02.678 13:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:02.678 13:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:02.678 13:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:02.678 13:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:02.678 13:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:02.678 13:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:02.678 13:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:02.678 13:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:02.678 13:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:02.678 13:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:02.678 13:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.678 13:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:02.937 13:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:02.937 "name": "Existed_Raid", 
00:26:02.937 "uuid": "57287798-c45f-4aab-af81-cd26ca205e51", 00:26:02.937 "strip_size_kb": 0, 00:26:02.937 "state": "configuring", 00:26:02.937 "raid_level": "raid1", 00:26:02.937 "superblock": true, 00:26:02.937 "num_base_bdevs": 2, 00:26:02.937 "num_base_bdevs_discovered": 0, 00:26:02.937 "num_base_bdevs_operational": 2, 00:26:02.937 "base_bdevs_list": [ 00:26:02.937 { 00:26:02.937 "name": "BaseBdev1", 00:26:02.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.937 "is_configured": false, 00:26:02.937 "data_offset": 0, 00:26:02.937 "data_size": 0 00:26:02.937 }, 00:26:02.937 { 00:26:02.937 "name": "BaseBdev2", 00:26:02.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:02.937 "is_configured": false, 00:26:02.937 "data_offset": 0, 00:26:02.937 "data_size": 0 00:26:02.937 } 00:26:02.937 ] 00:26:02.937 }' 00:26:02.937 13:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:02.937 13:25:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:03.612 13:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:03.612 [2024-07-26 13:25:44.030487] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:03.612 [2024-07-26 13:25:44.030519] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2771f20 name Existed_Raid, state configuring 00:26:03.612 13:25:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:03.872 [2024-07-26 13:25:44.259098] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:03.872 [2024-07-26 13:25:44.259129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:03.872 [2024-07-26 13:25:44.259144] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:03.872 [2024-07-26 13:25:44.259155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:03.872 13:25:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:26:04.131 [2024-07-26 13:25:44.493220] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:04.131 BaseBdev1 00:26:04.131 13:25:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:26:04.131 13:25:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:26:04.131 13:25:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:04.131 13:25:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:26:04.131 13:25:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:04.131 13:25:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:04.131 13:25:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:04.391 13:25:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:04.650 [ 00:26:04.650 { 00:26:04.650 "name": "BaseBdev1", 00:26:04.650 "aliases": [ 00:26:04.650 "7022e5f9-4fc8-46b5-bc18-310974cef0f8" 00:26:04.650 ], 00:26:04.650 
"product_name": "Malloc disk", 00:26:04.650 "block_size": 4096, 00:26:04.650 "num_blocks": 8192, 00:26:04.650 "uuid": "7022e5f9-4fc8-46b5-bc18-310974cef0f8", 00:26:04.650 "assigned_rate_limits": { 00:26:04.650 "rw_ios_per_sec": 0, 00:26:04.650 "rw_mbytes_per_sec": 0, 00:26:04.650 "r_mbytes_per_sec": 0, 00:26:04.650 "w_mbytes_per_sec": 0 00:26:04.650 }, 00:26:04.650 "claimed": true, 00:26:04.650 "claim_type": "exclusive_write", 00:26:04.650 "zoned": false, 00:26:04.650 "supported_io_types": { 00:26:04.650 "read": true, 00:26:04.650 "write": true, 00:26:04.650 "unmap": true, 00:26:04.650 "flush": true, 00:26:04.650 "reset": true, 00:26:04.650 "nvme_admin": false, 00:26:04.650 "nvme_io": false, 00:26:04.650 "nvme_io_md": false, 00:26:04.650 "write_zeroes": true, 00:26:04.650 "zcopy": true, 00:26:04.650 "get_zone_info": false, 00:26:04.650 "zone_management": false, 00:26:04.650 "zone_append": false, 00:26:04.650 "compare": false, 00:26:04.650 "compare_and_write": false, 00:26:04.650 "abort": true, 00:26:04.650 "seek_hole": false, 00:26:04.650 "seek_data": false, 00:26:04.650 "copy": true, 00:26:04.650 "nvme_iov_md": false 00:26:04.650 }, 00:26:04.650 "memory_domains": [ 00:26:04.650 { 00:26:04.650 "dma_device_id": "system", 00:26:04.650 "dma_device_type": 1 00:26:04.650 }, 00:26:04.650 { 00:26:04.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:04.650 "dma_device_type": 2 00:26:04.650 } 00:26:04.650 ], 00:26:04.650 "driver_specific": {} 00:26:04.650 } 00:26:04.650 ] 00:26:04.650 13:25:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:26:04.650 13:25:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:04.650 13:25:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:04.650 13:25:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:26:04.650 13:25:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:04.650 13:25:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:04.650 13:25:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:04.650 13:25:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:04.650 13:25:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:04.650 13:25:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:04.650 13:25:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:04.650 13:25:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:04.650 13:25:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:04.910 13:25:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:04.910 "name": "Existed_Raid", 00:26:04.910 "uuid": "ab2c4b9f-5dba-49ac-bbe8-21410ccfaf61", 00:26:04.910 "strip_size_kb": 0, 00:26:04.910 "state": "configuring", 00:26:04.910 "raid_level": "raid1", 00:26:04.910 "superblock": true, 00:26:04.910 "num_base_bdevs": 2, 00:26:04.910 "num_base_bdevs_discovered": 1, 00:26:04.910 "num_base_bdevs_operational": 2, 00:26:04.910 "base_bdevs_list": [ 00:26:04.910 { 00:26:04.910 "name": "BaseBdev1", 00:26:04.910 "uuid": "7022e5f9-4fc8-46b5-bc18-310974cef0f8", 00:26:04.910 "is_configured": true, 00:26:04.910 "data_offset": 256, 00:26:04.910 "data_size": 7936 00:26:04.910 }, 00:26:04.910 { 00:26:04.910 "name": "BaseBdev2", 00:26:04.910 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:26:04.910 "is_configured": false, 00:26:04.910 "data_offset": 0, 00:26:04.910 "data_size": 0 00:26:04.910 } 00:26:04.910 ] 00:26:04.910 }' 00:26:04.910 13:25:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:04.910 13:25:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:05.479 13:25:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:05.479 [2024-07-26 13:25:45.969095] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:05.479 [2024-07-26 13:25:45.969132] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2771810 name Existed_Raid, state configuring 00:26:05.479 13:25:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:05.739 [2024-07-26 13:25:46.193728] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:05.739 [2024-07-26 13:25:46.195161] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:05.739 [2024-07-26 13:25:46.195195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:05.739 13:25:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:26:05.739 13:25:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:05.739 13:25:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:05.739 13:25:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:26:05.739 13:25:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:05.739 13:25:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:05.739 13:25:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:05.739 13:25:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:05.739 13:25:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:05.739 13:25:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:05.739 13:25:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:05.739 13:25:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:05.739 13:25:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.739 13:25:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:05.998 13:25:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:05.998 "name": "Existed_Raid", 00:26:05.998 "uuid": "eceaf73d-cca0-4291-a545-2cf0e326a532", 00:26:05.998 "strip_size_kb": 0, 00:26:05.998 "state": "configuring", 00:26:05.998 "raid_level": "raid1", 00:26:05.998 "superblock": true, 00:26:05.998 "num_base_bdevs": 2, 00:26:05.998 "num_base_bdevs_discovered": 1, 00:26:05.998 "num_base_bdevs_operational": 2, 00:26:05.998 "base_bdevs_list": [ 00:26:05.998 { 00:26:05.998 "name": "BaseBdev1", 00:26:05.998 "uuid": "7022e5f9-4fc8-46b5-bc18-310974cef0f8", 00:26:05.998 "is_configured": true, 00:26:05.998 "data_offset": 256, 
00:26:05.998 "data_size": 7936 00:26:05.998 }, 00:26:05.998 { 00:26:05.998 "name": "BaseBdev2", 00:26:05.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:05.998 "is_configured": false, 00:26:05.998 "data_offset": 0, 00:26:05.998 "data_size": 0 00:26:05.998 } 00:26:05.998 ] 00:26:05.998 }' 00:26:05.998 13:25:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:05.998 13:25:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:06.566 13:25:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:26:06.825 [2024-07-26 13:25:47.231567] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:06.825 [2024-07-26 13:25:47.231708] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2772610 00:26:06.825 [2024-07-26 13:25:47.231721] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:06.825 [2024-07-26 13:25:47.231883] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x275e690 00:26:06.825 [2024-07-26 13:25:47.232000] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2772610 00:26:06.825 [2024-07-26 13:25:47.232009] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2772610 00:26:06.825 [2024-07-26 13:25:47.232093] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:06.825 BaseBdev2 00:26:06.825 13:25:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:26:06.825 13:25:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:26:06.825 13:25:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:26:06.825 13:25:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:26:06.825 13:25:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:06.825 13:25:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:06.825 13:25:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:07.084 13:25:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:07.343 [ 00:26:07.343 { 00:26:07.343 "name": "BaseBdev2", 00:26:07.343 "aliases": [ 00:26:07.343 "a328f05a-1acb-4dc0-94a8-671040e3ddf0" 00:26:07.343 ], 00:26:07.343 "product_name": "Malloc disk", 00:26:07.343 "block_size": 4096, 00:26:07.343 "num_blocks": 8192, 00:26:07.343 "uuid": "a328f05a-1acb-4dc0-94a8-671040e3ddf0", 00:26:07.343 "assigned_rate_limits": { 00:26:07.343 "rw_ios_per_sec": 0, 00:26:07.343 "rw_mbytes_per_sec": 0, 00:26:07.344 "r_mbytes_per_sec": 0, 00:26:07.344 "w_mbytes_per_sec": 0 00:26:07.344 }, 00:26:07.344 "claimed": true, 00:26:07.344 "claim_type": "exclusive_write", 00:26:07.344 "zoned": false, 00:26:07.344 "supported_io_types": { 00:26:07.344 "read": true, 00:26:07.344 "write": true, 00:26:07.344 "unmap": true, 00:26:07.344 "flush": true, 00:26:07.344 "reset": true, 00:26:07.344 "nvme_admin": false, 00:26:07.344 "nvme_io": false, 00:26:07.344 "nvme_io_md": false, 00:26:07.344 "write_zeroes": true, 00:26:07.344 "zcopy": true, 00:26:07.344 "get_zone_info": false, 00:26:07.344 "zone_management": false, 00:26:07.344 "zone_append": false, 00:26:07.344 "compare": false, 00:26:07.344 "compare_and_write": false, 00:26:07.344 "abort": true, 00:26:07.344 
"seek_hole": false, 00:26:07.344 "seek_data": false, 00:26:07.344 "copy": true, 00:26:07.344 "nvme_iov_md": false 00:26:07.344 }, 00:26:07.344 "memory_domains": [ 00:26:07.344 { 00:26:07.344 "dma_device_id": "system", 00:26:07.344 "dma_device_type": 1 00:26:07.344 }, 00:26:07.344 { 00:26:07.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:07.344 "dma_device_type": 2 00:26:07.344 } 00:26:07.344 ], 00:26:07.344 "driver_specific": {} 00:26:07.344 } 00:26:07.344 ] 00:26:07.344 13:25:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:26:07.344 13:25:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:07.344 13:25:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:07.344 13:25:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:26:07.344 13:25:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:07.344 13:25:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:07.344 13:25:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:07.344 13:25:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:07.344 13:25:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:07.344 13:25:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:07.344 13:25:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:07.344 13:25:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:07.344 13:25:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:26:07.344 13:25:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:07.344 13:25:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:07.603 13:25:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:07.603 "name": "Existed_Raid", 00:26:07.603 "uuid": "eceaf73d-cca0-4291-a545-2cf0e326a532", 00:26:07.603 "strip_size_kb": 0, 00:26:07.603 "state": "online", 00:26:07.603 "raid_level": "raid1", 00:26:07.603 "superblock": true, 00:26:07.603 "num_base_bdevs": 2, 00:26:07.603 "num_base_bdevs_discovered": 2, 00:26:07.603 "num_base_bdevs_operational": 2, 00:26:07.603 "base_bdevs_list": [ 00:26:07.603 { 00:26:07.603 "name": "BaseBdev1", 00:26:07.603 "uuid": "7022e5f9-4fc8-46b5-bc18-310974cef0f8", 00:26:07.603 "is_configured": true, 00:26:07.603 "data_offset": 256, 00:26:07.603 "data_size": 7936 00:26:07.603 }, 00:26:07.603 { 00:26:07.603 "name": "BaseBdev2", 00:26:07.603 "uuid": "a328f05a-1acb-4dc0-94a8-671040e3ddf0", 00:26:07.603 "is_configured": true, 00:26:07.603 "data_offset": 256, 00:26:07.603 "data_size": 7936 00:26:07.603 } 00:26:07.603 ] 00:26:07.603 }' 00:26:07.603 13:25:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:07.603 13:25:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:08.171 13:25:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:26:08.171 13:25:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:08.171 13:25:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:08.171 13:25:48 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:08.171 13:25:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:08.171 13:25:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:26:08.171 13:25:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:08.171 13:25:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:08.171 [2024-07-26 13:25:48.619577] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:08.171 13:25:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:08.171 "name": "Existed_Raid", 00:26:08.171 "aliases": [ 00:26:08.171 "eceaf73d-cca0-4291-a545-2cf0e326a532" 00:26:08.171 ], 00:26:08.171 "product_name": "Raid Volume", 00:26:08.171 "block_size": 4096, 00:26:08.171 "num_blocks": 7936, 00:26:08.171 "uuid": "eceaf73d-cca0-4291-a545-2cf0e326a532", 00:26:08.171 "assigned_rate_limits": { 00:26:08.171 "rw_ios_per_sec": 0, 00:26:08.171 "rw_mbytes_per_sec": 0, 00:26:08.171 "r_mbytes_per_sec": 0, 00:26:08.171 "w_mbytes_per_sec": 0 00:26:08.171 }, 00:26:08.171 "claimed": false, 00:26:08.171 "zoned": false, 00:26:08.171 "supported_io_types": { 00:26:08.171 "read": true, 00:26:08.171 "write": true, 00:26:08.171 "unmap": false, 00:26:08.171 "flush": false, 00:26:08.171 "reset": true, 00:26:08.171 "nvme_admin": false, 00:26:08.171 "nvme_io": false, 00:26:08.171 "nvme_io_md": false, 00:26:08.171 "write_zeroes": true, 00:26:08.171 "zcopy": false, 00:26:08.171 "get_zone_info": false, 00:26:08.171 "zone_management": false, 00:26:08.171 "zone_append": false, 00:26:08.171 "compare": false, 00:26:08.171 "compare_and_write": false, 00:26:08.171 "abort": false, 00:26:08.171 "seek_hole": false, 00:26:08.171 "seek_data": false, 
00:26:08.171 "copy": false, 00:26:08.171 "nvme_iov_md": false 00:26:08.171 }, 00:26:08.171 "memory_domains": [ 00:26:08.171 { 00:26:08.171 "dma_device_id": "system", 00:26:08.171 "dma_device_type": 1 00:26:08.171 }, 00:26:08.171 { 00:26:08.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.171 "dma_device_type": 2 00:26:08.171 }, 00:26:08.171 { 00:26:08.171 "dma_device_id": "system", 00:26:08.171 "dma_device_type": 1 00:26:08.171 }, 00:26:08.171 { 00:26:08.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.171 "dma_device_type": 2 00:26:08.171 } 00:26:08.171 ], 00:26:08.171 "driver_specific": { 00:26:08.171 "raid": { 00:26:08.171 "uuid": "eceaf73d-cca0-4291-a545-2cf0e326a532", 00:26:08.171 "strip_size_kb": 0, 00:26:08.171 "state": "online", 00:26:08.171 "raid_level": "raid1", 00:26:08.171 "superblock": true, 00:26:08.171 "num_base_bdevs": 2, 00:26:08.171 "num_base_bdevs_discovered": 2, 00:26:08.171 "num_base_bdevs_operational": 2, 00:26:08.171 "base_bdevs_list": [ 00:26:08.171 { 00:26:08.171 "name": "BaseBdev1", 00:26:08.171 "uuid": "7022e5f9-4fc8-46b5-bc18-310974cef0f8", 00:26:08.171 "is_configured": true, 00:26:08.171 "data_offset": 256, 00:26:08.171 "data_size": 7936 00:26:08.171 }, 00:26:08.171 { 00:26:08.171 "name": "BaseBdev2", 00:26:08.171 "uuid": "a328f05a-1acb-4dc0-94a8-671040e3ddf0", 00:26:08.171 "is_configured": true, 00:26:08.171 "data_offset": 256, 00:26:08.171 "data_size": 7936 00:26:08.171 } 00:26:08.171 ] 00:26:08.171 } 00:26:08.171 } 00:26:08.171 }' 00:26:08.171 13:25:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:08.171 13:25:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:26:08.171 BaseBdev2' 00:26:08.171 13:25:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:08.171 13:25:48 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:26:08.171 13:25:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:08.431 13:25:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:08.431 "name": "BaseBdev1", 00:26:08.431 "aliases": [ 00:26:08.431 "7022e5f9-4fc8-46b5-bc18-310974cef0f8" 00:26:08.431 ], 00:26:08.431 "product_name": "Malloc disk", 00:26:08.431 "block_size": 4096, 00:26:08.431 "num_blocks": 8192, 00:26:08.431 "uuid": "7022e5f9-4fc8-46b5-bc18-310974cef0f8", 00:26:08.431 "assigned_rate_limits": { 00:26:08.431 "rw_ios_per_sec": 0, 00:26:08.431 "rw_mbytes_per_sec": 0, 00:26:08.431 "r_mbytes_per_sec": 0, 00:26:08.431 "w_mbytes_per_sec": 0 00:26:08.431 }, 00:26:08.431 "claimed": true, 00:26:08.431 "claim_type": "exclusive_write", 00:26:08.431 "zoned": false, 00:26:08.431 "supported_io_types": { 00:26:08.431 "read": true, 00:26:08.431 "write": true, 00:26:08.431 "unmap": true, 00:26:08.431 "flush": true, 00:26:08.431 "reset": true, 00:26:08.431 "nvme_admin": false, 00:26:08.431 "nvme_io": false, 00:26:08.431 "nvme_io_md": false, 00:26:08.431 "write_zeroes": true, 00:26:08.431 "zcopy": true, 00:26:08.431 "get_zone_info": false, 00:26:08.431 "zone_management": false, 00:26:08.431 "zone_append": false, 00:26:08.431 "compare": false, 00:26:08.431 "compare_and_write": false, 00:26:08.431 "abort": true, 00:26:08.431 "seek_hole": false, 00:26:08.431 "seek_data": false, 00:26:08.431 "copy": true, 00:26:08.431 "nvme_iov_md": false 00:26:08.431 }, 00:26:08.431 "memory_domains": [ 00:26:08.431 { 00:26:08.431 "dma_device_id": "system", 00:26:08.431 "dma_device_type": 1 00:26:08.431 }, 00:26:08.431 { 00:26:08.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.431 "dma_device_type": 2 00:26:08.431 } 00:26:08.431 ], 00:26:08.431 "driver_specific": 
{} 00:26:08.431 }' 00:26:08.431 13:25:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:08.431 13:25:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:08.691 13:25:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:08.691 13:25:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:08.691 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:08.691 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:08.691 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:08.691 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:08.691 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:08.691 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:08.691 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:08.691 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:08.691 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:08.691 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:08.691 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:08.950 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:08.950 "name": "BaseBdev2", 00:26:08.950 "aliases": [ 00:26:08.950 "a328f05a-1acb-4dc0-94a8-671040e3ddf0" 00:26:08.950 
], 00:26:08.950 "product_name": "Malloc disk", 00:26:08.950 "block_size": 4096, 00:26:08.950 "num_blocks": 8192, 00:26:08.950 "uuid": "a328f05a-1acb-4dc0-94a8-671040e3ddf0", 00:26:08.950 "assigned_rate_limits": { 00:26:08.950 "rw_ios_per_sec": 0, 00:26:08.950 "rw_mbytes_per_sec": 0, 00:26:08.950 "r_mbytes_per_sec": 0, 00:26:08.950 "w_mbytes_per_sec": 0 00:26:08.950 }, 00:26:08.950 "claimed": true, 00:26:08.950 "claim_type": "exclusive_write", 00:26:08.950 "zoned": false, 00:26:08.950 "supported_io_types": { 00:26:08.950 "read": true, 00:26:08.950 "write": true, 00:26:08.950 "unmap": true, 00:26:08.950 "flush": true, 00:26:08.950 "reset": true, 00:26:08.950 "nvme_admin": false, 00:26:08.950 "nvme_io": false, 00:26:08.950 "nvme_io_md": false, 00:26:08.950 "write_zeroes": true, 00:26:08.950 "zcopy": true, 00:26:08.950 "get_zone_info": false, 00:26:08.950 "zone_management": false, 00:26:08.950 "zone_append": false, 00:26:08.950 "compare": false, 00:26:08.950 "compare_and_write": false, 00:26:08.950 "abort": true, 00:26:08.950 "seek_hole": false, 00:26:08.950 "seek_data": false, 00:26:08.950 "copy": true, 00:26:08.950 "nvme_iov_md": false 00:26:08.950 }, 00:26:08.950 "memory_domains": [ 00:26:08.950 { 00:26:08.950 "dma_device_id": "system", 00:26:08.950 "dma_device_type": 1 00:26:08.950 }, 00:26:08.950 { 00:26:08.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.950 "dma_device_type": 2 00:26:08.950 } 00:26:08.950 ], 00:26:08.950 "driver_specific": {} 00:26:08.950 }' 00:26:08.950 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:09.210 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:09.210 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:09.210 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:09.210 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:09.210 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:09.210 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:09.210 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:09.210 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:09.210 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:09.210 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:09.468 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:09.468 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:09.468 [2024-07-26 13:25:49.954896] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:09.468 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:26:09.468 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:26:09.468 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:09.468 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:26:09.468 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:26:09.468 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:26:09.468 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:09.468 13:25:49 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:09.468 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:09.468 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:09.468 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:09.468 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:09.468 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:09.468 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:09.468 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:09.468 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.468 13:25:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:09.727 13:25:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:09.727 "name": "Existed_Raid", 00:26:09.727 "uuid": "eceaf73d-cca0-4291-a545-2cf0e326a532", 00:26:09.727 "strip_size_kb": 0, 00:26:09.727 "state": "online", 00:26:09.727 "raid_level": "raid1", 00:26:09.727 "superblock": true, 00:26:09.727 "num_base_bdevs": 2, 00:26:09.727 "num_base_bdevs_discovered": 1, 00:26:09.727 "num_base_bdevs_operational": 1, 00:26:09.727 "base_bdevs_list": [ 00:26:09.727 { 00:26:09.727 "name": null, 00:26:09.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:09.727 "is_configured": false, 00:26:09.727 "data_offset": 256, 00:26:09.727 "data_size": 7936 00:26:09.727 }, 00:26:09.727 { 
00:26:09.727 "name": "BaseBdev2", 00:26:09.727 "uuid": "a328f05a-1acb-4dc0-94a8-671040e3ddf0", 00:26:09.727 "is_configured": true, 00:26:09.727 "data_offset": 256, 00:26:09.727 "data_size": 7936 00:26:09.727 } 00:26:09.727 ] 00:26:09.727 }' 00:26:09.727 13:25:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:09.727 13:25:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:10.295 13:25:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:26:10.295 13:25:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:10.295 13:25:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:10.295 13:25:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:10.554 13:25:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:10.554 13:25:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:10.554 13:25:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:10.814 [2024-07-26 13:25:51.231248] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:10.814 [2024-07-26 13:25:51.231328] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:10.814 [2024-07-26 13:25:51.241721] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:10.814 [2024-07-26 13:25:51.241752] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:10.814 [2024-07-26 
13:25:51.241763] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2772610 name Existed_Raid, state offline 00:26:10.814 13:25:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:10.814 13:25:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:10.814 13:25:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:26:10.814 13:25:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.073 13:25:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:26:11.073 13:25:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:26:11.073 13:25:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:26:11.073 13:25:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 819677 00:26:11.073 13:25:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 819677 ']' 00:26:11.073 13:25:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 819677 00:26:11.073 13:25:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:26:11.073 13:25:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:11.073 13:25:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 819677 00:26:11.073 13:25:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:11.073 13:25:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:11.073 13:25:51 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 819677' 00:26:11.073 killing process with pid 819677 00:26:11.073 13:25:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 819677 00:26:11.073 [2024-07-26 13:25:51.521103] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:11.073 13:25:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 819677 00:26:11.073 [2024-07-26 13:25:51.521945] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:11.333 13:25:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:26:11.333 00:26:11.333 real 0m9.887s 00:26:11.333 user 0m17.602s 00:26:11.333 sys 0m1.860s 00:26:11.333 13:25:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:11.333 13:25:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:11.333 ************************************ 00:26:11.333 END TEST raid_state_function_test_sb_4k 00:26:11.333 ************************************ 00:26:11.333 13:25:51 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:26:11.333 13:25:51 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:11.333 13:25:51 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:11.333 13:25:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:11.333 ************************************ 00:26:11.333 START TEST raid_superblock_test_4k 00:26:11.333 ************************************ 00:26:11.333 13:25:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:26:11.333 13:25:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:26:11.333 13:25:51 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:26:11.333 13:25:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:26:11.333 13:25:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:26:11.333 13:25:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:26:11.333 13:25:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:26:11.333 13:25:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:26:11.333 13:25:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:26:11.334 13:25:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:26:11.334 13:25:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@414 -- # local strip_size 00:26:11.334 13:25:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:26:11.334 13:25:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:26:11.334 13:25:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:26:11.334 13:25:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:26:11.334 13:25:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:26:11.334 13:25:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@427 -- # raid_pid=821492 00:26:11.334 13:25:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@428 -- # waitforlisten 821492 /var/tmp/spdk-raid.sock 00:26:11.334 13:25:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:26:11.334 13:25:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 821492 ']' 
00:26:11.334 13:25:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:11.334 13:25:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:11.334 13:25:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:11.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:11.334 13:25:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:11.334 13:25:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:26:11.334 [2024-07-26 13:25:51.854938] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:26:11.334 [2024-07-26 13:25:51.854997] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid821492 ] 00:26:11.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.593 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:11.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.593 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:11.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.593 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:11.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.593 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:11.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.593 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:11.593 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:11.593 EAL: Requested device 
0000:3d:01.5 cannot be used 00:26:11.593 [2024-07-26 13:25:51.987887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.593 [2024-07-26 13:25:52.074781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.852 [2024-07-26 13:25:52.136255] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*:
raid_bdev_get_ctx_size 00:26:11.853 [2024-07-26 13:25:52.136284] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:12.420 13:25:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:12.420 13:25:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:26:12.420 13:25:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:26:12.420 13:25:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:12.420 13:25:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:26:12.420 13:25:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:26:12.420 13:25:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:12.420 13:25:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:12.420 13:25:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:26:12.420 13:25:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:12.420 13:25:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:26:12.679 malloc1 00:26:12.679 13:25:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:12.679 [2024-07-26 13:25:53.192762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:12.679 [2024-07-26 13:25:53.192810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:26:12.679 [2024-07-26 13:25:53.192827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e8b2f0 00:26:12.679 [2024-07-26 13:25:53.192839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:12.679 [2024-07-26 13:25:53.194290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:12.679 [2024-07-26 13:25:53.194319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:12.679 pt1 00:26:12.938 13:25:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:26:12.938 13:25:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:12.938 13:25:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:26:12.938 13:25:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:26:12.938 13:25:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:12.938 13:25:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:12.938 13:25:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:26:12.938 13:25:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:12.938 13:25:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:26:12.938 malloc2 00:26:12.938 13:25:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:13.197 [2024-07-26 13:25:53.654436] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:13.197 [2024-07-26 13:25:53.654477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:13.197 [2024-07-26 13:25:53.654491] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e8c6d0 00:26:13.197 [2024-07-26 13:25:53.654503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:13.197 [2024-07-26 13:25:53.655833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:13.197 [2024-07-26 13:25:53.655859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:13.197 pt2 00:26:13.197 13:25:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:26:13.197 13:25:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:13.197 13:25:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:26:13.456 [2024-07-26 13:25:53.875027] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:13.456 [2024-07-26 13:25:53.876084] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:13.456 [2024-07-26 13:25:53.876207] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2025310 00:26:13.456 [2024-07-26 13:25:53.876220] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:13.456 [2024-07-26 13:25:53.876390] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e84550 00:26:13.456 [2024-07-26 13:25:53.876513] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2025310 00:26:13.456 [2024-07-26 13:25:53.876522] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2025310 
00:26:13.456 [2024-07-26 13:25:53.876617] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:13.456 13:25:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:13.456 13:25:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:13.456 13:25:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:13.456 13:25:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:13.456 13:25:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:13.456 13:25:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:13.456 13:25:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:13.456 13:25:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:13.456 13:25:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:13.456 13:25:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:13.456 13:25:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.456 13:25:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:13.715 13:25:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:13.715 "name": "raid_bdev1", 00:26:13.715 "uuid": "48749589-4d05-4f1a-9971-9f974ac22f37", 00:26:13.715 "strip_size_kb": 0, 00:26:13.715 "state": "online", 00:26:13.715 "raid_level": "raid1", 00:26:13.715 "superblock": true, 00:26:13.715 "num_base_bdevs": 2, 00:26:13.715 "num_base_bdevs_discovered": 2, 00:26:13.715 
"num_base_bdevs_operational": 2, 00:26:13.715 "base_bdevs_list": [ 00:26:13.715 { 00:26:13.715 "name": "pt1", 00:26:13.715 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:13.715 "is_configured": true, 00:26:13.715 "data_offset": 256, 00:26:13.715 "data_size": 7936 00:26:13.715 }, 00:26:13.715 { 00:26:13.715 "name": "pt2", 00:26:13.715 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:13.715 "is_configured": true, 00:26:13.715 "data_offset": 256, 00:26:13.715 "data_size": 7936 00:26:13.715 } 00:26:13.715 ] 00:26:13.715 }' 00:26:13.715 13:25:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:13.715 13:25:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:26:14.329 13:25:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:26:14.329 13:25:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:14.329 13:25:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:14.329 13:25:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:14.329 13:25:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:14.329 13:25:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:26:14.329 13:25:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:14.329 13:25:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:14.621 [2024-07-26 13:25:54.897939] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:14.621 13:25:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:14.621 "name": "raid_bdev1", 00:26:14.621 "aliases": [ 00:26:14.621 
"48749589-4d05-4f1a-9971-9f974ac22f37" 00:26:14.621 ], 00:26:14.621 "product_name": "Raid Volume", 00:26:14.621 "block_size": 4096, 00:26:14.621 "num_blocks": 7936, 00:26:14.621 "uuid": "48749589-4d05-4f1a-9971-9f974ac22f37", 00:26:14.621 "assigned_rate_limits": { 00:26:14.621 "rw_ios_per_sec": 0, 00:26:14.621 "rw_mbytes_per_sec": 0, 00:26:14.621 "r_mbytes_per_sec": 0, 00:26:14.621 "w_mbytes_per_sec": 0 00:26:14.621 }, 00:26:14.621 "claimed": false, 00:26:14.621 "zoned": false, 00:26:14.621 "supported_io_types": { 00:26:14.621 "read": true, 00:26:14.621 "write": true, 00:26:14.621 "unmap": false, 00:26:14.621 "flush": false, 00:26:14.621 "reset": true, 00:26:14.621 "nvme_admin": false, 00:26:14.621 "nvme_io": false, 00:26:14.621 "nvme_io_md": false, 00:26:14.621 "write_zeroes": true, 00:26:14.621 "zcopy": false, 00:26:14.621 "get_zone_info": false, 00:26:14.621 "zone_management": false, 00:26:14.621 "zone_append": false, 00:26:14.621 "compare": false, 00:26:14.621 "compare_and_write": false, 00:26:14.621 "abort": false, 00:26:14.621 "seek_hole": false, 00:26:14.621 "seek_data": false, 00:26:14.621 "copy": false, 00:26:14.621 "nvme_iov_md": false 00:26:14.621 }, 00:26:14.621 "memory_domains": [ 00:26:14.621 { 00:26:14.621 "dma_device_id": "system", 00:26:14.621 "dma_device_type": 1 00:26:14.621 }, 00:26:14.621 { 00:26:14.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:14.621 "dma_device_type": 2 00:26:14.621 }, 00:26:14.621 { 00:26:14.621 "dma_device_id": "system", 00:26:14.621 "dma_device_type": 1 00:26:14.621 }, 00:26:14.621 { 00:26:14.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:14.621 "dma_device_type": 2 00:26:14.621 } 00:26:14.621 ], 00:26:14.621 "driver_specific": { 00:26:14.621 "raid": { 00:26:14.621 "uuid": "48749589-4d05-4f1a-9971-9f974ac22f37", 00:26:14.621 "strip_size_kb": 0, 00:26:14.621 "state": "online", 00:26:14.621 "raid_level": "raid1", 00:26:14.621 "superblock": true, 00:26:14.621 "num_base_bdevs": 2, 00:26:14.621 
"num_base_bdevs_discovered": 2, 00:26:14.621 "num_base_bdevs_operational": 2, 00:26:14.621 "base_bdevs_list": [ 00:26:14.621 { 00:26:14.621 "name": "pt1", 00:26:14.621 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:14.621 "is_configured": true, 00:26:14.621 "data_offset": 256, 00:26:14.621 "data_size": 7936 00:26:14.621 }, 00:26:14.621 { 00:26:14.621 "name": "pt2", 00:26:14.621 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:14.621 "is_configured": true, 00:26:14.621 "data_offset": 256, 00:26:14.621 "data_size": 7936 00:26:14.621 } 00:26:14.621 ] 00:26:14.621 } 00:26:14.621 } 00:26:14.621 }' 00:26:14.621 13:25:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:14.621 13:25:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:14.621 pt2' 00:26:14.621 13:25:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:14.621 13:25:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:14.621 13:25:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:14.881 13:25:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:14.881 "name": "pt1", 00:26:14.881 "aliases": [ 00:26:14.881 "00000000-0000-0000-0000-000000000001" 00:26:14.881 ], 00:26:14.881 "product_name": "passthru", 00:26:14.881 "block_size": 4096, 00:26:14.881 "num_blocks": 8192, 00:26:14.881 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:14.881 "assigned_rate_limits": { 00:26:14.881 "rw_ios_per_sec": 0, 00:26:14.881 "rw_mbytes_per_sec": 0, 00:26:14.881 "r_mbytes_per_sec": 0, 00:26:14.881 "w_mbytes_per_sec": 0 00:26:14.881 }, 00:26:14.881 "claimed": true, 00:26:14.881 "claim_type": "exclusive_write", 00:26:14.881 
"zoned": false, 00:26:14.881 "supported_io_types": { 00:26:14.881 "read": true, 00:26:14.881 "write": true, 00:26:14.881 "unmap": true, 00:26:14.881 "flush": true, 00:26:14.881 "reset": true, 00:26:14.881 "nvme_admin": false, 00:26:14.881 "nvme_io": false, 00:26:14.881 "nvme_io_md": false, 00:26:14.881 "write_zeroes": true, 00:26:14.881 "zcopy": true, 00:26:14.881 "get_zone_info": false, 00:26:14.881 "zone_management": false, 00:26:14.881 "zone_append": false, 00:26:14.881 "compare": false, 00:26:14.881 "compare_and_write": false, 00:26:14.881 "abort": true, 00:26:14.881 "seek_hole": false, 00:26:14.881 "seek_data": false, 00:26:14.881 "copy": true, 00:26:14.881 "nvme_iov_md": false 00:26:14.881 }, 00:26:14.881 "memory_domains": [ 00:26:14.881 { 00:26:14.881 "dma_device_id": "system", 00:26:14.881 "dma_device_type": 1 00:26:14.881 }, 00:26:14.881 { 00:26:14.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:14.881 "dma_device_type": 2 00:26:14.881 } 00:26:14.881 ], 00:26:14.881 "driver_specific": { 00:26:14.881 "passthru": { 00:26:14.881 "name": "pt1", 00:26:14.881 "base_bdev_name": "malloc1" 00:26:14.881 } 00:26:14.881 } 00:26:14.881 }' 00:26:14.881 13:25:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:14.881 13:25:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:14.881 13:25:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:14.881 13:25:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:14.881 13:25:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:14.881 13:25:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:14.881 13:25:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:14.881 13:25:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:15.140 13:25:55 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:15.140 13:25:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:15.140 13:25:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:15.140 13:25:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:15.140 13:25:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:15.140 13:25:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:15.140 13:25:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:15.398 13:25:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:15.398 "name": "pt2", 00:26:15.398 "aliases": [ 00:26:15.398 "00000000-0000-0000-0000-000000000002" 00:26:15.398 ], 00:26:15.398 "product_name": "passthru", 00:26:15.398 "block_size": 4096, 00:26:15.398 "num_blocks": 8192, 00:26:15.398 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:15.398 "assigned_rate_limits": { 00:26:15.398 "rw_ios_per_sec": 0, 00:26:15.398 "rw_mbytes_per_sec": 0, 00:26:15.398 "r_mbytes_per_sec": 0, 00:26:15.398 "w_mbytes_per_sec": 0 00:26:15.398 }, 00:26:15.398 "claimed": true, 00:26:15.398 "claim_type": "exclusive_write", 00:26:15.399 "zoned": false, 00:26:15.399 "supported_io_types": { 00:26:15.399 "read": true, 00:26:15.399 "write": true, 00:26:15.399 "unmap": true, 00:26:15.399 "flush": true, 00:26:15.399 "reset": true, 00:26:15.399 "nvme_admin": false, 00:26:15.399 "nvme_io": false, 00:26:15.399 "nvme_io_md": false, 00:26:15.399 "write_zeroes": true, 00:26:15.399 "zcopy": true, 00:26:15.399 "get_zone_info": false, 00:26:15.399 "zone_management": false, 00:26:15.399 "zone_append": false, 00:26:15.399 "compare": false, 00:26:15.399 
"compare_and_write": false, 00:26:15.399 "abort": true, 00:26:15.399 "seek_hole": false, 00:26:15.399 "seek_data": false, 00:26:15.399 "copy": true, 00:26:15.399 "nvme_iov_md": false 00:26:15.399 }, 00:26:15.399 "memory_domains": [ 00:26:15.399 { 00:26:15.399 "dma_device_id": "system", 00:26:15.399 "dma_device_type": 1 00:26:15.399 }, 00:26:15.399 { 00:26:15.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:15.399 "dma_device_type": 2 00:26:15.399 } 00:26:15.399 ], 00:26:15.399 "driver_specific": { 00:26:15.399 "passthru": { 00:26:15.399 "name": "pt2", 00:26:15.399 "base_bdev_name": "malloc2" 00:26:15.399 } 00:26:15.399 } 00:26:15.399 }' 00:26:15.399 13:25:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:15.399 13:25:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:15.399 13:25:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:15.399 13:25:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:15.399 13:25:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:15.399 13:25:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:15.399 13:25:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:15.658 13:25:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:15.658 13:25:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:15.658 13:25:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:15.658 13:25:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:15.658 13:25:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:15.658 13:25:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:15.658 13:25:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:26:15.918 [2024-07-26 13:25:56.265552] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:15.918 13:25:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=48749589-4d05-4f1a-9971-9f974ac22f37 00:26:15.918 13:25:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' -z 48749589-4d05-4f1a-9971-9f974ac22f37 ']' 00:26:15.918 13:25:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:16.177 [2024-07-26 13:25:56.493914] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:16.177 [2024-07-26 13:25:56.493930] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:16.177 [2024-07-26 13:25:56.493986] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:16.177 [2024-07-26 13:25:56.494037] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:16.177 [2024-07-26 13:25:56.494048] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2025310 name raid_bdev1, state offline 00:26:16.177 13:25:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.177 13:25:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:26:16.436 13:25:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:26:16.436 13:25:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:26:16.436 
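The `jq -r '.[] | .uuid'` step above pulls the raid bdev's UUID out of the array returned by `bdev_get_bdevs` and assigns it to `raid_bdev_uuid`. Against a one-element sample record (hand-written, with only the fields the filter needs, and the UUID copied from this log) it behaves like:

```shell
# Sample one-element array in the shape bdev_get_bdevs returns; fields other
# than name/uuid are omitted since the filter ignores them.
json='[{"name":"raid_bdev1","uuid":"48749589-4d05-4f1a-9971-9f974ac22f37"}]'

# -r emits the raw string without JSON quoting, suitable for shell assignment.
printf '%s\n' "$json" | jq -r '.[] | .uuid'
# prints 48749589-4d05-4f1a-9971-9f974ac22f37
```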
13:25:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:26:16.436 13:25:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:16.436 13:25:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:26:16.436 13:25:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:16.696 13:25:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:16.696 13:25:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:26:16.956 13:25:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:26:16.956 13:25:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:26:16.956 13:25:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:26:16.956 13:25:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:26:16.956 13:25:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:16.956 13:25:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:16.956 13:25:57 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:16.956 13:25:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:16.956 13:25:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:16.956 13:25:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:16.956 13:25:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:16.956 13:25:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:26:16.956 13:25:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:26:17.216 [2024-07-26 13:25:57.616832] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:17.216 [2024-07-26 13:25:57.618085] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:17.216 [2024-07-26 13:25:57.618148] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:17.216 [2024-07-26 13:25:57.618194] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:17.216 [2024-07-26 13:25:57.618213] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:17.216 [2024-07-26 13:25:57.618222] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e81ec0 name raid_bdev1, state configuring 00:26:17.216 request: 
00:26:17.216 { 00:26:17.216 "name": "raid_bdev1", 00:26:17.216 "raid_level": "raid1", 00:26:17.216 "base_bdevs": [ 00:26:17.216 "malloc1", 00:26:17.216 "malloc2" 00:26:17.216 ], 00:26:17.216 "superblock": false, 00:26:17.216 "method": "bdev_raid_create", 00:26:17.216 "req_id": 1 00:26:17.216 } 00:26:17.216 Got JSON-RPC error response 00:26:17.216 response: 00:26:17.216 { 00:26:17.216 "code": -17, 00:26:17.216 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:17.216 } 00:26:17.216 13:25:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:26:17.216 13:25:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:17.216 13:25:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:17.216 13:25:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:17.216 13:25:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.216 13:25:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:26:17.476 13:25:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:26:17.476 13:25:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:26:17.476 13:25:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:17.735 [2024-07-26 13:25:58.069986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:17.736 [2024-07-26 13:25:58.070039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:17.736 [2024-07-26 13:25:58.070057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x202ed70 00:26:17.736 [2024-07-26 13:25:58.070068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:17.736 [2024-07-26 13:25:58.071561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:17.736 [2024-07-26 13:25:58.071590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:17.736 [2024-07-26 13:25:58.071654] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:17.736 [2024-07-26 13:25:58.071678] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:17.736 pt1 00:26:17.736 13:25:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:26:17.736 13:25:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:17.736 13:25:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:17.736 13:25:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:17.736 13:25:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:17.736 13:25:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:17.736 13:25:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:17.736 13:25:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:17.736 13:25:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:17.736 13:25:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:17.736 13:25:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:26:17.736 13:25:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:17.995 13:25:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:17.995 "name": "raid_bdev1", 00:26:17.995 "uuid": "48749589-4d05-4f1a-9971-9f974ac22f37", 00:26:17.995 "strip_size_kb": 0, 00:26:17.995 "state": "configuring", 00:26:17.995 "raid_level": "raid1", 00:26:17.995 "superblock": true, 00:26:17.995 "num_base_bdevs": 2, 00:26:17.995 "num_base_bdevs_discovered": 1, 00:26:17.995 "num_base_bdevs_operational": 2, 00:26:17.995 "base_bdevs_list": [ 00:26:17.995 { 00:26:17.995 "name": "pt1", 00:26:17.995 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:17.995 "is_configured": true, 00:26:17.995 "data_offset": 256, 00:26:17.995 "data_size": 7936 00:26:17.995 }, 00:26:17.995 { 00:26:17.995 "name": null, 00:26:17.995 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:17.995 "is_configured": false, 00:26:17.995 "data_offset": 256, 00:26:17.995 "data_size": 7936 00:26:17.995 } 00:26:17.995 ] 00:26:17.995 }' 00:26:17.995 13:25:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:17.995 13:25:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:26:18.565 13:25:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:26:18.565 13:25:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:26:18.565 13:25:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:26:18.565 13:25:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:18.565 [2024-07-26 13:25:59.072644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:18.565 
[2024-07-26 13:25:59.072696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:18.565 [2024-07-26 13:25:59.072716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x202e3f0 00:26:18.565 [2024-07-26 13:25:59.072728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:18.565 [2024-07-26 13:25:59.073056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:18.565 [2024-07-26 13:25:59.073073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:18.565 [2024-07-26 13:25:59.073149] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:18.565 [2024-07-26 13:25:59.073170] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:18.565 [2024-07-26 13:25:59.073266] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2025b70 00:26:18.565 [2024-07-26 13:25:59.073276] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:18.565 [2024-07-26 13:25:59.073430] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e82c60 00:26:18.565 [2024-07-26 13:25:59.073556] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2025b70 00:26:18.565 [2024-07-26 13:25:59.073565] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2025b70 00:26:18.565 [2024-07-26 13:25:59.073661] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:18.565 pt2 00:26:18.824 13:25:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:26:18.824 13:25:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:26:18.824 13:25:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:18.824 13:25:59 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:18.824 13:25:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:18.824 13:25:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:18.824 13:25:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:18.824 13:25:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:18.824 13:25:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:18.824 13:25:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:18.824 13:25:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:18.825 13:25:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:18.825 13:25:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:18.825 13:25:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:18.825 13:25:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:18.825 "name": "raid_bdev1", 00:26:18.825 "uuid": "48749589-4d05-4f1a-9971-9f974ac22f37", 00:26:18.825 "strip_size_kb": 0, 00:26:18.825 "state": "online", 00:26:18.825 "raid_level": "raid1", 00:26:18.825 "superblock": true, 00:26:18.825 "num_base_bdevs": 2, 00:26:18.825 "num_base_bdevs_discovered": 2, 00:26:18.825 "num_base_bdevs_operational": 2, 00:26:18.825 "base_bdevs_list": [ 00:26:18.825 { 00:26:18.825 "name": "pt1", 00:26:18.825 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:18.825 "is_configured": true, 00:26:18.825 "data_offset": 256, 00:26:18.825 "data_size": 7936 
00:26:18.825 }, 00:26:18.825 { 00:26:18.825 "name": "pt2", 00:26:18.825 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:18.825 "is_configured": true, 00:26:18.825 "data_offset": 256, 00:26:18.825 "data_size": 7936 00:26:18.825 } 00:26:18.825 ] 00:26:18.825 }' 00:26:18.825 13:25:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:18.825 13:25:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:26:19.762 13:25:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:26:19.762 13:25:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:19.762 13:25:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:19.762 13:25:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:19.762 13:25:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:19.762 13:25:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:26:19.762 13:25:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:19.762 13:25:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:19.762 [2024-07-26 13:26:00.183939] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:19.762 13:26:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:19.762 "name": "raid_bdev1", 00:26:19.762 "aliases": [ 00:26:19.762 "48749589-4d05-4f1a-9971-9f974ac22f37" 00:26:19.762 ], 00:26:19.762 "product_name": "Raid Volume", 00:26:19.762 "block_size": 4096, 00:26:19.762 "num_blocks": 7936, 00:26:19.762 "uuid": "48749589-4d05-4f1a-9971-9f974ac22f37", 00:26:19.762 "assigned_rate_limits": { 
00:26:19.762 "rw_ios_per_sec": 0, 00:26:19.762 "rw_mbytes_per_sec": 0, 00:26:19.762 "r_mbytes_per_sec": 0, 00:26:19.762 "w_mbytes_per_sec": 0 00:26:19.762 }, 00:26:19.762 "claimed": false, 00:26:19.762 "zoned": false, 00:26:19.762 "supported_io_types": { 00:26:19.762 "read": true, 00:26:19.762 "write": true, 00:26:19.762 "unmap": false, 00:26:19.762 "flush": false, 00:26:19.762 "reset": true, 00:26:19.762 "nvme_admin": false, 00:26:19.762 "nvme_io": false, 00:26:19.762 "nvme_io_md": false, 00:26:19.762 "write_zeroes": true, 00:26:19.762 "zcopy": false, 00:26:19.762 "get_zone_info": false, 00:26:19.762 "zone_management": false, 00:26:19.762 "zone_append": false, 00:26:19.762 "compare": false, 00:26:19.762 "compare_and_write": false, 00:26:19.762 "abort": false, 00:26:19.762 "seek_hole": false, 00:26:19.762 "seek_data": false, 00:26:19.762 "copy": false, 00:26:19.762 "nvme_iov_md": false 00:26:19.762 }, 00:26:19.762 "memory_domains": [ 00:26:19.762 { 00:26:19.762 "dma_device_id": "system", 00:26:19.762 "dma_device_type": 1 00:26:19.762 }, 00:26:19.762 { 00:26:19.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:19.762 "dma_device_type": 2 00:26:19.762 }, 00:26:19.762 { 00:26:19.762 "dma_device_id": "system", 00:26:19.762 "dma_device_type": 1 00:26:19.762 }, 00:26:19.762 { 00:26:19.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:19.762 "dma_device_type": 2 00:26:19.762 } 00:26:19.762 ], 00:26:19.762 "driver_specific": { 00:26:19.762 "raid": { 00:26:19.762 "uuid": "48749589-4d05-4f1a-9971-9f974ac22f37", 00:26:19.762 "strip_size_kb": 0, 00:26:19.762 "state": "online", 00:26:19.762 "raid_level": "raid1", 00:26:19.762 "superblock": true, 00:26:19.762 "num_base_bdevs": 2, 00:26:19.762 "num_base_bdevs_discovered": 2, 00:26:19.762 "num_base_bdevs_operational": 2, 00:26:19.762 "base_bdevs_list": [ 00:26:19.762 { 00:26:19.762 "name": "pt1", 00:26:19.762 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:19.762 "is_configured": true, 00:26:19.762 "data_offset": 256, 
00:26:19.762 "data_size": 7936 00:26:19.762 }, 00:26:19.762 { 00:26:19.762 "name": "pt2", 00:26:19.762 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:19.762 "is_configured": true, 00:26:19.762 "data_offset": 256, 00:26:19.762 "data_size": 7936 00:26:19.762 } 00:26:19.762 ] 00:26:19.762 } 00:26:19.762 } 00:26:19.762 }' 00:26:19.762 13:26:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:19.762 13:26:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:19.762 pt2' 00:26:19.762 13:26:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:19.762 13:26:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:19.762 13:26:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:20.330 13:26:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:20.330 "name": "pt1", 00:26:20.330 "aliases": [ 00:26:20.330 "00000000-0000-0000-0000-000000000001" 00:26:20.330 ], 00:26:20.330 "product_name": "passthru", 00:26:20.330 "block_size": 4096, 00:26:20.330 "num_blocks": 8192, 00:26:20.330 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:20.330 "assigned_rate_limits": { 00:26:20.330 "rw_ios_per_sec": 0, 00:26:20.330 "rw_mbytes_per_sec": 0, 00:26:20.330 "r_mbytes_per_sec": 0, 00:26:20.330 "w_mbytes_per_sec": 0 00:26:20.330 }, 00:26:20.330 "claimed": true, 00:26:20.330 "claim_type": "exclusive_write", 00:26:20.330 "zoned": false, 00:26:20.330 "supported_io_types": { 00:26:20.330 "read": true, 00:26:20.330 "write": true, 00:26:20.330 "unmap": true, 00:26:20.330 "flush": true, 00:26:20.330 "reset": true, 00:26:20.330 "nvme_admin": false, 00:26:20.330 "nvme_io": false, 00:26:20.330 "nvme_io_md": 
false, 00:26:20.330 "write_zeroes": true, 00:26:20.330 "zcopy": true, 00:26:20.330 "get_zone_info": false, 00:26:20.330 "zone_management": false, 00:26:20.330 "zone_append": false, 00:26:20.330 "compare": false, 00:26:20.330 "compare_and_write": false, 00:26:20.330 "abort": true, 00:26:20.330 "seek_hole": false, 00:26:20.330 "seek_data": false, 00:26:20.330 "copy": true, 00:26:20.330 "nvme_iov_md": false 00:26:20.330 }, 00:26:20.330 "memory_domains": [ 00:26:20.330 { 00:26:20.330 "dma_device_id": "system", 00:26:20.330 "dma_device_type": 1 00:26:20.330 }, 00:26:20.330 { 00:26:20.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:20.330 "dma_device_type": 2 00:26:20.330 } 00:26:20.330 ], 00:26:20.330 "driver_specific": { 00:26:20.330 "passthru": { 00:26:20.330 "name": "pt1", 00:26:20.330 "base_bdev_name": "malloc1" 00:26:20.330 } 00:26:20.330 } 00:26:20.330 }' 00:26:20.330 13:26:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:20.330 13:26:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:20.330 13:26:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:20.330 13:26:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:20.590 13:26:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:20.590 13:26:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:20.590 13:26:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:20.590 13:26:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:20.590 13:26:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:20.590 13:26:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:20.590 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:26:20.590 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:20.590 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:20.590 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:20.590 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:20.849 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:20.849 "name": "pt2", 00:26:20.849 "aliases": [ 00:26:20.849 "00000000-0000-0000-0000-000000000002" 00:26:20.849 ], 00:26:20.849 "product_name": "passthru", 00:26:20.849 "block_size": 4096, 00:26:20.849 "num_blocks": 8192, 00:26:20.849 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:20.849 "assigned_rate_limits": { 00:26:20.849 "rw_ios_per_sec": 0, 00:26:20.849 "rw_mbytes_per_sec": 0, 00:26:20.849 "r_mbytes_per_sec": 0, 00:26:20.849 "w_mbytes_per_sec": 0 00:26:20.849 }, 00:26:20.849 "claimed": true, 00:26:20.849 "claim_type": "exclusive_write", 00:26:20.849 "zoned": false, 00:26:20.849 "supported_io_types": { 00:26:20.849 "read": true, 00:26:20.849 "write": true, 00:26:20.849 "unmap": true, 00:26:20.849 "flush": true, 00:26:20.849 "reset": true, 00:26:20.849 "nvme_admin": false, 00:26:20.849 "nvme_io": false, 00:26:20.849 "nvme_io_md": false, 00:26:20.849 "write_zeroes": true, 00:26:20.849 "zcopy": true, 00:26:20.849 "get_zone_info": false, 00:26:20.849 "zone_management": false, 00:26:20.849 "zone_append": false, 00:26:20.849 "compare": false, 00:26:20.849 "compare_and_write": false, 00:26:20.849 "abort": true, 00:26:20.849 "seek_hole": false, 00:26:20.849 "seek_data": false, 00:26:20.849 "copy": true, 00:26:20.849 "nvme_iov_md": false 00:26:20.849 }, 00:26:20.849 "memory_domains": [ 00:26:20.849 { 00:26:20.849 "dma_device_id": "system", 
00:26:20.849 "dma_device_type": 1 00:26:20.849 }, 00:26:20.849 { 00:26:20.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:20.849 "dma_device_type": 2 00:26:20.849 } 00:26:20.849 ], 00:26:20.849 "driver_specific": { 00:26:20.849 "passthru": { 00:26:20.849 "name": "pt2", 00:26:20.849 "base_bdev_name": "malloc2" 00:26:20.849 } 00:26:20.849 } 00:26:20.849 }' 00:26:20.849 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:20.849 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:20.849 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:26:20.849 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:20.849 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:20.849 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:20.849 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:21.109 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:21.109 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:21.109 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:21.109 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:21.109 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:21.109 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:21.109 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:26:21.368 [2024-07-26 13:26:01.675820] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:26:21.368 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@502 -- # '[' 48749589-4d05-4f1a-9971-9f974ac22f37 '!=' 48749589-4d05-4f1a-9971-9f974ac22f37 ']' 00:26:21.368 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:26:21.368 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:21.368 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:26:21.368 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:21.627 [2024-07-26 13:26:01.900218] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:21.627 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:21.627 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:21.627 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:21.627 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:21.627 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:21.627 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:21.627 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:21.627 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:21.627 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:21.627 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:21.627 13:26:01 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:21.627 13:26:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.886 13:26:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:21.886 "name": "raid_bdev1", 00:26:21.886 "uuid": "48749589-4d05-4f1a-9971-9f974ac22f37", 00:26:21.886 "strip_size_kb": 0, 00:26:21.886 "state": "online", 00:26:21.886 "raid_level": "raid1", 00:26:21.886 "superblock": true, 00:26:21.886 "num_base_bdevs": 2, 00:26:21.886 "num_base_bdevs_discovered": 1, 00:26:21.886 "num_base_bdevs_operational": 1, 00:26:21.886 "base_bdevs_list": [ 00:26:21.886 { 00:26:21.886 "name": null, 00:26:21.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:21.886 "is_configured": false, 00:26:21.886 "data_offset": 256, 00:26:21.886 "data_size": 7936 00:26:21.886 }, 00:26:21.886 { 00:26:21.886 "name": "pt2", 00:26:21.886 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:21.886 "is_configured": true, 00:26:21.886 "data_offset": 256, 00:26:21.886 "data_size": 7936 00:26:21.886 } 00:26:21.886 ] 00:26:21.886 }' 00:26:21.886 13:26:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:21.886 13:26:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:26:22.457 13:26:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:22.457 [2024-07-26 13:26:02.918890] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:22.457 [2024-07-26 13:26:02.918915] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:22.457 [2024-07-26 13:26:02.918967] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:26:22.457 [2024-07-26 13:26:02.919008] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:22.457 [2024-07-26 13:26:02.919019] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2025b70 name raid_bdev1, state offline 00:26:22.457 13:26:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:22.457 13:26:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:26:22.716 13:26:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:26:22.716 13:26:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:26:22.716 13:26:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:26:22.716 13:26:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:26:22.716 13:26:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:22.975 13:26:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:22.975 13:26:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:26:22.975 13:26:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:26:22.975 13:26:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:26:22.975 13:26:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@534 -- # i=1 00:26:22.975 13:26:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:26:23.233 [2024-07-26 13:26:03.596637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:23.233 [2024-07-26 13:26:03.596687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:23.233 [2024-07-26 13:26:03.596705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e81e00 00:26:23.233 [2024-07-26 13:26:03.596716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:23.233 [2024-07-26 13:26:03.598255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:23.233 [2024-07-26 13:26:03.598284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:23.233 [2024-07-26 13:26:03.598350] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:23.233 [2024-07-26 13:26:03.598376] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:23.233 [2024-07-26 13:26:03.598462] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1e849e0 00:26:23.233 [2024-07-26 13:26:03.598472] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:23.233 [2024-07-26 13:26:03.598633] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e8c960 00:26:23.233 [2024-07-26 13:26:03.598744] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1e849e0 00:26:23.233 [2024-07-26 13:26:03.598753] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1e849e0 00:26:23.233 [2024-07-26 13:26:03.598847] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:23.233 pt2 00:26:23.233 13:26:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:23.233 13:26:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:26:23.234 13:26:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:23.234 13:26:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:23.234 13:26:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:23.234 13:26:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:23.234 13:26:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:23.234 13:26:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:23.234 13:26:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:23.234 13:26:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:23.234 13:26:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:23.234 13:26:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:23.493 13:26:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:23.493 "name": "raid_bdev1", 00:26:23.493 "uuid": "48749589-4d05-4f1a-9971-9f974ac22f37", 00:26:23.493 "strip_size_kb": 0, 00:26:23.493 "state": "online", 00:26:23.493 "raid_level": "raid1", 00:26:23.493 "superblock": true, 00:26:23.493 "num_base_bdevs": 2, 00:26:23.493 "num_base_bdevs_discovered": 1, 00:26:23.493 "num_base_bdevs_operational": 1, 00:26:23.493 "base_bdevs_list": [ 00:26:23.493 { 00:26:23.493 "name": null, 00:26:23.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:23.493 "is_configured": false, 00:26:23.493 "data_offset": 256, 00:26:23.493 "data_size": 7936 00:26:23.493 }, 00:26:23.493 { 00:26:23.493 "name": "pt2", 00:26:23.493 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:26:23.493 "is_configured": true, 00:26:23.493 "data_offset": 256, 00:26:23.493 "data_size": 7936 00:26:23.493 } 00:26:23.493 ] 00:26:23.493 }' 00:26:23.493 13:26:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:23.493 13:26:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:26:24.061 13:26:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:24.321 [2024-07-26 13:26:04.635358] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:24.321 [2024-07-26 13:26:04.635382] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:24.321 [2024-07-26 13:26:04.635438] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:24.321 [2024-07-26 13:26:04.635477] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:24.321 [2024-07-26 13:26:04.635488] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e849e0 name raid_bdev1, state offline 00:26:24.321 13:26:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:26:24.321 13:26:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.580 13:26:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:26:24.580 13:26:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:26:24.580 13:26:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:26:24.580 13:26:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:24.580 [2024-07-26 13:26:05.076511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:24.580 [2024-07-26 13:26:05.076556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:24.580 [2024-07-26 13:26:05.076573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20259d0 00:26:24.580 [2024-07-26 13:26:05.076585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:24.580 [2024-07-26 13:26:05.078090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:24.580 [2024-07-26 13:26:05.078118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:24.580 [2024-07-26 13:26:05.078191] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:24.580 [2024-07-26 13:26:05.078217] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:24.580 [2024-07-26 13:26:05.078313] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:26:24.580 [2024-07-26 13:26:05.078325] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:24.580 [2024-07-26 13:26:05.078338] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e832c0 name raid_bdev1, state configuring 00:26:24.580 [2024-07-26 13:26:05.078359] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:24.580 [2024-07-26 13:26:05.078412] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x1e846c0 00:26:24.580 [2024-07-26 13:26:05.078427] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:24.580 [2024-07-26 13:26:05.078583] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e845c0 00:26:24.580 [2024-07-26 13:26:05.078696] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1e846c0 00:26:24.581 [2024-07-26 13:26:05.078705] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1e846c0 00:26:24.581 [2024-07-26 13:26:05.078796] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:24.581 pt1 00:26:24.581 13:26:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:26:24.581 13:26:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:24.581 13:26:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:24.581 13:26:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:24.581 13:26:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:24.581 13:26:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:24.581 13:26:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:24.581 13:26:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:24.581 13:26:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:24.581 13:26:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:24.581 13:26:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:24.581 13:26:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.581 13:26:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:26:24.839 13:26:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:24.840 "name": "raid_bdev1", 00:26:24.840 "uuid": "48749589-4d05-4f1a-9971-9f974ac22f37", 00:26:24.840 "strip_size_kb": 0, 00:26:24.840 "state": "online", 00:26:24.840 "raid_level": "raid1", 00:26:24.840 "superblock": true, 00:26:24.840 "num_base_bdevs": 2, 00:26:24.840 "num_base_bdevs_discovered": 1, 00:26:24.840 "num_base_bdevs_operational": 1, 00:26:24.840 "base_bdevs_list": [ 00:26:24.840 { 00:26:24.840 "name": null, 00:26:24.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:24.840 "is_configured": false, 00:26:24.840 "data_offset": 256, 00:26:24.840 "data_size": 7936 00:26:24.840 }, 00:26:24.840 { 00:26:24.840 "name": "pt2", 00:26:24.840 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:24.840 "is_configured": true, 00:26:24.840 "data_offset": 256, 00:26:24.840 "data_size": 7936 00:26:24.840 } 00:26:24.840 ] 00:26:24.840 }' 00:26:24.840 13:26:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:24.840 13:26:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:26:25.777 13:26:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:26:25.777 13:26:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:25.777 13:26:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:26:25.777 13:26:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:25.777 13:26:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:26:26.036 [2024-07-26 
13:26:06.315981] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:26.036 13:26:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # '[' 48749589-4d05-4f1a-9971-9f974ac22f37 '!=' 48749589-4d05-4f1a-9971-9f974ac22f37 ']' 00:26:26.036 13:26:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@578 -- # killprocess 821492 00:26:26.036 13:26:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 821492 ']' 00:26:26.036 13:26:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 821492 00:26:26.036 13:26:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:26:26.036 13:26:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:26.036 13:26:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 821492 00:26:26.036 13:26:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:26.036 13:26:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:26.036 13:26:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 821492' 00:26:26.036 killing process with pid 821492 00:26:26.036 13:26:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 821492 00:26:26.036 [2024-07-26 13:26:06.390904] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:26.036 [2024-07-26 13:26:06.390959] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:26.036 [2024-07-26 13:26:06.390996] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:26.036 [2024-07-26 13:26:06.391007] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e846c0 name raid_bdev1, state offline 00:26:26.036 
13:26:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 821492 00:26:26.036 [2024-07-26 13:26:06.406934] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:26.295 13:26:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@580 -- # return 0 00:26:26.295 00:26:26.295 real 0m14.805s 00:26:26.295 user 0m26.834s 00:26:26.295 sys 0m2.675s 00:26:26.295 13:26:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:26.295 13:26:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:26:26.295 ************************************ 00:26:26.295 END TEST raid_superblock_test_4k 00:26:26.295 ************************************ 00:26:26.295 13:26:06 bdev_raid -- bdev/bdev_raid.sh@980 -- # '[' true = true ']' 00:26:26.295 13:26:06 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:26:26.295 13:26:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:26:26.295 13:26:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:26.295 13:26:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:26.295 ************************************ 00:26:26.295 START TEST raid_rebuild_test_sb_4k 00:26:26.295 ************************************ 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@588 -- # local verify=true 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # local strip_size 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # local create_arg 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@594 -- # local data_offset 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:26:26.296 13:26:06 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # raid_pid=824188 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # waitforlisten 824188 /var/tmp/spdk-raid.sock 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 824188 ']' 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:26.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:26.296 13:26:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:26.296 [2024-07-26 13:26:06.751024] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:26:26.296 [2024-07-26 13:26:06.751083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid824188 ] 00:26:26.296 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:26.296 Zero copy mechanism will not be used. 
00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:26.555 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:26.555 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:26.555 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:26.555 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:26.555 [2024-07-26 13:26:06.885549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.555 [2024-07-26 13:26:06.972615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.556 [2024-07-26 13:26:07.033728] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:26.556 [2024-07-26 13:26:07.033762] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:27.123 13:26:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:27.123 13:26:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:26:27.123 13:26:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:26:27.123 13:26:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:26:27.384 BaseBdev1_malloc 00:26:27.384 13:26:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:27.709 [2024-07-26 13:26:07.929513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:27.709 [2024-07-26 13:26:07.929559] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:27.709 [2024-07-26 13:26:07.929579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe4f5f0 00:26:27.709 [2024-07-26 13:26:07.929591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:27.709 [2024-07-26 13:26:07.931043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:27.709 [2024-07-26 13:26:07.931071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:27.709 BaseBdev1 00:26:27.709 13:26:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:26:27.709 13:26:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:26:27.968 BaseBdev2_malloc 00:26:27.968 13:26:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:28.227 [2024-07-26 13:26:08.672048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:28.227 [2024-07-26 13:26:08.672094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:28.227 [2024-07-26 13:26:08.672112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xff3130 00:26:28.228 [2024-07-26 13:26:08.672123] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:28.228 [2024-07-26 13:26:08.673557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:28.228 [2024-07-26 13:26:08.673585] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:28.228 BaseBdev2 00:26:28.228 13:26:08 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:26:28.487 spare_malloc 00:26:28.487 13:26:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:28.748 spare_delay 00:26:28.748 13:26:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:29.007 [2024-07-26 13:26:09.322273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:29.007 [2024-07-26 13:26:09.322315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:29.007 [2024-07-26 13:26:09.322333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xff2770 00:26:29.007 [2024-07-26 13:26:09.322345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:29.007 [2024-07-26 13:26:09.323720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:29.007 [2024-07-26 13:26:09.323747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:29.007 spare 00:26:29.007 13:26:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:26:29.267 [2024-07-26 13:26:09.538873] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:29.267 [2024-07-26 13:26:09.540044] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:29.267 [2024-07-26 13:26:09.540189] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xe47270 00:26:29.267 [2024-07-26 13:26:09.540202] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:29.267 [2024-07-26 13:26:09.540393] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xff33c0 00:26:29.267 [2024-07-26 13:26:09.540521] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe47270 00:26:29.267 [2024-07-26 13:26:09.540531] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe47270 00:26:29.267 [2024-07-26 13:26:09.540632] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:29.267 13:26:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:29.267 13:26:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:29.267 13:26:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:29.267 13:26:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:29.267 13:26:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:29.267 13:26:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:29.267 13:26:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:29.267 13:26:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:29.267 13:26:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:29.267 13:26:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:29.267 13:26:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.267 13:26:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:29.267 13:26:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:29.267 "name": "raid_bdev1", 00:26:29.267 "uuid": "7fe1ee7c-9384-4d3b-8dfc-e48f0f8e17a1", 00:26:29.267 "strip_size_kb": 0, 00:26:29.267 "state": "online", 00:26:29.267 "raid_level": "raid1", 00:26:29.267 "superblock": true, 00:26:29.267 "num_base_bdevs": 2, 00:26:29.267 "num_base_bdevs_discovered": 2, 00:26:29.267 "num_base_bdevs_operational": 2, 00:26:29.267 "base_bdevs_list": [ 00:26:29.267 { 00:26:29.267 "name": "BaseBdev1", 00:26:29.267 "uuid": "c731aae6-b9be-5e62-89c4-016734d9f30c", 00:26:29.267 "is_configured": true, 00:26:29.267 "data_offset": 256, 00:26:29.267 "data_size": 7936 00:26:29.267 }, 00:26:29.267 { 00:26:29.267 "name": "BaseBdev2", 00:26:29.267 "uuid": "b30f3cfa-f8b8-5e91-b6e0-e67cf2a8122c", 00:26:29.267 "is_configured": true, 00:26:29.267 "data_offset": 256, 00:26:29.267 "data_size": 7936 00:26:29.267 } 00:26:29.267 ] 00:26:29.267 }' 00:26:29.267 13:26:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:29.267 13:26:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:29.835 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:29.835 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:26:30.095 [2024-07-26 13:26:10.477649] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:30.095 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:26:30.095 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.095 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:30.354 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:26:30.354 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:26:30.354 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:26:30.354 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:26:30.354 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:30.354 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:30.354 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:30.354 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:30.354 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:30.354 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:30.354 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:26:30.354 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:30.354 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:30.354 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:30.613 [2024-07-26 13:26:10.938675] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfe7290 00:26:30.613 
/dev/nbd0 00:26:30.613 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:30.613 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:30.613 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:30.613 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:26:30.613 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:30.613 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:30.613 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:30.613 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:26:30.613 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:30.613 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:30.613 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:30.613 1+0 records in 00:26:30.613 1+0 records out 00:26:30.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229308 s, 17.9 MB/s 00:26:30.613 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:30.613 13:26:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:26:30.613 13:26:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:30.613 13:26:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:30.613 
13:26:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:26:30.613 13:26:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:30.613 13:26:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:30.613 13:26:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:26:30.613 13:26:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:26:30.613 13:26:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:26:31.181 7936+0 records in 00:26:31.181 7936+0 records out 00:26:31.181 32505856 bytes (33 MB, 31 MiB) copied, 0.677613 s, 48.0 MB/s 00:26:31.181 13:26:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:31.182 13:26:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:31.182 13:26:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:31.182 13:26:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:31.182 13:26:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:26:31.182 13:26:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:31.182 13:26:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:31.441 13:26:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:31.441 13:26:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:31.441 13:26:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:31.441 
13:26:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:31.441 13:26:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:31.441 13:26:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:31.441 [2024-07-26 13:26:11.942318] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:31.441 13:26:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:26:31.441 13:26:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:26:31.441 13:26:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:31.701 [2024-07-26 13:26:12.154917] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:31.701 13:26:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:31.701 13:26:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:31.701 13:26:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:31.701 13:26:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:31.701 13:26:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:31.701 13:26:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:31.701 13:26:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:31.701 13:26:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:31.701 13:26:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:31.701 13:26:12 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:31.701 13:26:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.701 13:26:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:31.960 13:26:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:31.960 "name": "raid_bdev1", 00:26:31.960 "uuid": "7fe1ee7c-9384-4d3b-8dfc-e48f0f8e17a1", 00:26:31.960 "strip_size_kb": 0, 00:26:31.960 "state": "online", 00:26:31.960 "raid_level": "raid1", 00:26:31.960 "superblock": true, 00:26:31.960 "num_base_bdevs": 2, 00:26:31.960 "num_base_bdevs_discovered": 1, 00:26:31.960 "num_base_bdevs_operational": 1, 00:26:31.960 "base_bdevs_list": [ 00:26:31.960 { 00:26:31.960 "name": null, 00:26:31.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:31.960 "is_configured": false, 00:26:31.960 "data_offset": 256, 00:26:31.960 "data_size": 7936 00:26:31.960 }, 00:26:31.960 { 00:26:31.960 "name": "BaseBdev2", 00:26:31.960 "uuid": "b30f3cfa-f8b8-5e91-b6e0-e67cf2a8122c", 00:26:31.960 "is_configured": true, 00:26:31.960 "data_offset": 256, 00:26:31.960 "data_size": 7936 00:26:31.960 } 00:26:31.960 ] 00:26:31.960 }' 00:26:31.960 13:26:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:31.960 13:26:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:32.527 13:26:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:32.786 [2024-07-26 13:26:13.193654] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:32.786 [2024-07-26 13:26:13.198477] bdev_raid.c: 263:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0xff33c0 00:26:32.786 [2024-07-26 13:26:13.200615] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:32.786 13:26:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:33.721 13:26:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:33.721 13:26:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:33.721 13:26:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:33.721 13:26:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:33.721 13:26:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:33.721 13:26:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:33.721 13:26:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:33.980 13:26:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:33.980 "name": "raid_bdev1", 00:26:33.980 "uuid": "7fe1ee7c-9384-4d3b-8dfc-e48f0f8e17a1", 00:26:33.980 "strip_size_kb": 0, 00:26:33.980 "state": "online", 00:26:33.980 "raid_level": "raid1", 00:26:33.980 "superblock": true, 00:26:33.980 "num_base_bdevs": 2, 00:26:33.980 "num_base_bdevs_discovered": 2, 00:26:33.980 "num_base_bdevs_operational": 2, 00:26:33.980 "process": { 00:26:33.980 "type": "rebuild", 00:26:33.980 "target": "spare", 00:26:33.980 "progress": { 00:26:33.980 "blocks": 3072, 00:26:33.980 "percent": 38 00:26:33.980 } 00:26:33.980 }, 00:26:33.980 "base_bdevs_list": [ 00:26:33.980 { 00:26:33.980 "name": "spare", 00:26:33.980 "uuid": "d9213f42-6523-5f8b-8efe-3524ffc23aa5", 00:26:33.980 
"is_configured": true, 00:26:33.980 "data_offset": 256, 00:26:33.980 "data_size": 7936 00:26:33.980 }, 00:26:33.980 { 00:26:33.980 "name": "BaseBdev2", 00:26:33.980 "uuid": "b30f3cfa-f8b8-5e91-b6e0-e67cf2a8122c", 00:26:33.980 "is_configured": true, 00:26:33.980 "data_offset": 256, 00:26:33.980 "data_size": 7936 00:26:33.980 } 00:26:33.980 ] 00:26:33.980 }' 00:26:33.980 13:26:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:33.980 13:26:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:33.980 13:26:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:34.239 13:26:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:34.239 13:26:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:34.239 [2024-07-26 13:26:14.754892] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:34.498 [2024-07-26 13:26:14.812307] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:34.498 [2024-07-26 13:26:14.812358] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:34.499 [2024-07-26 13:26:14.812373] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:34.499 [2024-07-26 13:26:14.812381] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:34.499 13:26:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:34.499 13:26:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:34.499 13:26:14 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:34.499 13:26:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:34.499 13:26:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:34.499 13:26:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:34.499 13:26:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:34.499 13:26:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:34.499 13:26:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:34.499 13:26:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:34.499 13:26:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.499 13:26:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.757 13:26:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:34.758 "name": "raid_bdev1", 00:26:34.758 "uuid": "7fe1ee7c-9384-4d3b-8dfc-e48f0f8e17a1", 00:26:34.758 "strip_size_kb": 0, 00:26:34.758 "state": "online", 00:26:34.758 "raid_level": "raid1", 00:26:34.758 "superblock": true, 00:26:34.758 "num_base_bdevs": 2, 00:26:34.758 "num_base_bdevs_discovered": 1, 00:26:34.758 "num_base_bdevs_operational": 1, 00:26:34.758 "base_bdevs_list": [ 00:26:34.758 { 00:26:34.758 "name": null, 00:26:34.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:34.758 "is_configured": false, 00:26:34.758 "data_offset": 256, 00:26:34.758 "data_size": 7936 00:26:34.758 }, 00:26:34.758 { 00:26:34.758 "name": "BaseBdev2", 00:26:34.758 "uuid": "b30f3cfa-f8b8-5e91-b6e0-e67cf2a8122c", 00:26:34.758 
"is_configured": true, 00:26:34.758 "data_offset": 256, 00:26:34.758 "data_size": 7936 00:26:34.758 } 00:26:34.758 ] 00:26:34.758 }' 00:26:34.758 13:26:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:34.758 13:26:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:35.325 13:26:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:35.325 13:26:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:35.325 13:26:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:35.325 13:26:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:35.325 13:26:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:35.325 13:26:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.325 13:26:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:35.584 13:26:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:35.584 "name": "raid_bdev1", 00:26:35.584 "uuid": "7fe1ee7c-9384-4d3b-8dfc-e48f0f8e17a1", 00:26:35.584 "strip_size_kb": 0, 00:26:35.584 "state": "online", 00:26:35.584 "raid_level": "raid1", 00:26:35.584 "superblock": true, 00:26:35.584 "num_base_bdevs": 2, 00:26:35.584 "num_base_bdevs_discovered": 1, 00:26:35.584 "num_base_bdevs_operational": 1, 00:26:35.584 "base_bdevs_list": [ 00:26:35.584 { 00:26:35.584 "name": null, 00:26:35.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.584 "is_configured": false, 00:26:35.584 "data_offset": 256, 00:26:35.584 "data_size": 7936 00:26:35.584 }, 00:26:35.584 { 00:26:35.584 "name": "BaseBdev2", 
00:26:35.584 "uuid": "b30f3cfa-f8b8-5e91-b6e0-e67cf2a8122c", 00:26:35.584 "is_configured": true, 00:26:35.584 "data_offset": 256, 00:26:35.584 "data_size": 7936 00:26:35.584 } 00:26:35.584 ] 00:26:35.584 }' 00:26:35.584 13:26:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:35.584 13:26:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:35.584 13:26:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:35.584 13:26:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:35.584 13:26:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:35.843 [2024-07-26 13:26:16.152035] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:35.843 [2024-07-26 13:26:16.156860] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfe7290 00:26:35.843 [2024-07-26 13:26:16.158234] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:35.843 13:26:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@678 -- # sleep 1 00:26:36.779 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:36.779 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:36.779 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:36.779 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:36.779 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:36.779 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:36.779 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:37.038 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:37.039 "name": "raid_bdev1", 00:26:37.039 "uuid": "7fe1ee7c-9384-4d3b-8dfc-e48f0f8e17a1", 00:26:37.039 "strip_size_kb": 0, 00:26:37.039 "state": "online", 00:26:37.039 "raid_level": "raid1", 00:26:37.039 "superblock": true, 00:26:37.039 "num_base_bdevs": 2, 00:26:37.039 "num_base_bdevs_discovered": 2, 00:26:37.039 "num_base_bdevs_operational": 2, 00:26:37.039 "process": { 00:26:37.039 "type": "rebuild", 00:26:37.039 "target": "spare", 00:26:37.039 "progress": { 00:26:37.039 "blocks": 3072, 00:26:37.039 "percent": 38 00:26:37.039 } 00:26:37.039 }, 00:26:37.039 "base_bdevs_list": [ 00:26:37.039 { 00:26:37.039 "name": "spare", 00:26:37.039 "uuid": "d9213f42-6523-5f8b-8efe-3524ffc23aa5", 00:26:37.039 "is_configured": true, 00:26:37.039 "data_offset": 256, 00:26:37.039 "data_size": 7936 00:26:37.039 }, 00:26:37.039 { 00:26:37.039 "name": "BaseBdev2", 00:26:37.039 "uuid": "b30f3cfa-f8b8-5e91-b6e0-e67cf2a8122c", 00:26:37.039 "is_configured": true, 00:26:37.039 "data_offset": 256, 00:26:37.039 "data_size": 7936 00:26:37.039 } 00:26:37.039 ] 00:26:37.039 }' 00:26:37.039 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:37.039 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:37.039 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:37.039 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:37.039 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@681 -- # 
'[' true = true ']' 00:26:37.039 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:26:37.039 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:26:37.039 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:26:37.039 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:26:37.039 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:26:37.039 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # local timeout=963 00:26:37.039 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:26:37.039 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:37.039 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:37.039 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:37.039 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:37.039 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:37.039 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.039 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:37.298 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:37.298 "name": "raid_bdev1", 00:26:37.298 "uuid": "7fe1ee7c-9384-4d3b-8dfc-e48f0f8e17a1", 00:26:37.298 "strip_size_kb": 0, 00:26:37.298 "state": "online", 00:26:37.298 "raid_level": 
"raid1", 00:26:37.298 "superblock": true, 00:26:37.298 "num_base_bdevs": 2, 00:26:37.298 "num_base_bdevs_discovered": 2, 00:26:37.298 "num_base_bdevs_operational": 2, 00:26:37.298 "process": { 00:26:37.298 "type": "rebuild", 00:26:37.298 "target": "spare", 00:26:37.298 "progress": { 00:26:37.298 "blocks": 3840, 00:26:37.298 "percent": 48 00:26:37.298 } 00:26:37.298 }, 00:26:37.298 "base_bdevs_list": [ 00:26:37.298 { 00:26:37.298 "name": "spare", 00:26:37.298 "uuid": "d9213f42-6523-5f8b-8efe-3524ffc23aa5", 00:26:37.298 "is_configured": true, 00:26:37.298 "data_offset": 256, 00:26:37.298 "data_size": 7936 00:26:37.298 }, 00:26:37.298 { 00:26:37.298 "name": "BaseBdev2", 00:26:37.298 "uuid": "b30f3cfa-f8b8-5e91-b6e0-e67cf2a8122c", 00:26:37.298 "is_configured": true, 00:26:37.298 "data_offset": 256, 00:26:37.298 "data_size": 7936 00:26:37.298 } 00:26:37.298 ] 00:26:37.298 }' 00:26:37.298 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:37.298 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:37.298 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:37.298 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:37.298 13:26:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@726 -- # sleep 1 00:26:38.677 13:26:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:26:38.677 13:26:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:38.677 13:26:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:38.677 13:26:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:38.677 13:26:18 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@184 -- # local target=spare 00:26:38.677 13:26:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:38.677 13:26:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:38.677 13:26:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:38.677 13:26:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:38.677 "name": "raid_bdev1", 00:26:38.677 "uuid": "7fe1ee7c-9384-4d3b-8dfc-e48f0f8e17a1", 00:26:38.677 "strip_size_kb": 0, 00:26:38.677 "state": "online", 00:26:38.677 "raid_level": "raid1", 00:26:38.677 "superblock": true, 00:26:38.677 "num_base_bdevs": 2, 00:26:38.677 "num_base_bdevs_discovered": 2, 00:26:38.677 "num_base_bdevs_operational": 2, 00:26:38.677 "process": { 00:26:38.677 "type": "rebuild", 00:26:38.677 "target": "spare", 00:26:38.677 "progress": { 00:26:38.677 "blocks": 7168, 00:26:38.677 "percent": 90 00:26:38.677 } 00:26:38.677 }, 00:26:38.677 "base_bdevs_list": [ 00:26:38.677 { 00:26:38.677 "name": "spare", 00:26:38.677 "uuid": "d9213f42-6523-5f8b-8efe-3524ffc23aa5", 00:26:38.677 "is_configured": true, 00:26:38.677 "data_offset": 256, 00:26:38.677 "data_size": 7936 00:26:38.677 }, 00:26:38.677 { 00:26:38.677 "name": "BaseBdev2", 00:26:38.677 "uuid": "b30f3cfa-f8b8-5e91-b6e0-e67cf2a8122c", 00:26:38.677 "is_configured": true, 00:26:38.677 "data_offset": 256, 00:26:38.677 "data_size": 7936 00:26:38.677 } 00:26:38.677 ] 00:26:38.677 }' 00:26:38.677 13:26:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:38.677 13:26:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:38.677 13:26:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.target // "none"' 00:26:38.677 13:26:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:38.677 13:26:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@726 -- # sleep 1 00:26:38.936 [2024-07-26 13:26:19.280753] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:38.936 [2024-07-26 13:26:19.280807] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:38.936 [2024-07-26 13:26:19.280888] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:39.873 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:26:39.873 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:39.873 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:39.873 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:39.873 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:39.873 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:39.873 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:39.873 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:39.873 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:39.873 "name": "raid_bdev1", 00:26:39.873 "uuid": "7fe1ee7c-9384-4d3b-8dfc-e48f0f8e17a1", 00:26:39.873 "strip_size_kb": 0, 00:26:39.873 "state": "online", 00:26:39.873 "raid_level": "raid1", 00:26:39.873 "superblock": true, 00:26:39.873 "num_base_bdevs": 
2, 00:26:39.873 "num_base_bdevs_discovered": 2, 00:26:39.873 "num_base_bdevs_operational": 2, 00:26:39.873 "base_bdevs_list": [ 00:26:39.873 { 00:26:39.873 "name": "spare", 00:26:39.873 "uuid": "d9213f42-6523-5f8b-8efe-3524ffc23aa5", 00:26:39.873 "is_configured": true, 00:26:39.873 "data_offset": 256, 00:26:39.873 "data_size": 7936 00:26:39.873 }, 00:26:39.873 { 00:26:39.873 "name": "BaseBdev2", 00:26:39.873 "uuid": "b30f3cfa-f8b8-5e91-b6e0-e67cf2a8122c", 00:26:39.873 "is_configured": true, 00:26:39.873 "data_offset": 256, 00:26:39.873 "data_size": 7936 00:26:39.873 } 00:26:39.873 ] 00:26:39.873 }' 00:26:39.873 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:40.132 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:40.132 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:40.132 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:26:40.132 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@724 -- # break 00:26:40.132 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:40.132 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:40.132 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:40.132 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:40.132 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:40.132 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:40.132 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:40.391 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:40.391 "name": "raid_bdev1", 00:26:40.391 "uuid": "7fe1ee7c-9384-4d3b-8dfc-e48f0f8e17a1", 00:26:40.391 "strip_size_kb": 0, 00:26:40.391 "state": "online", 00:26:40.391 "raid_level": "raid1", 00:26:40.391 "superblock": true, 00:26:40.391 "num_base_bdevs": 2, 00:26:40.391 "num_base_bdevs_discovered": 2, 00:26:40.391 "num_base_bdevs_operational": 2, 00:26:40.391 "base_bdevs_list": [ 00:26:40.391 { 00:26:40.391 "name": "spare", 00:26:40.391 "uuid": "d9213f42-6523-5f8b-8efe-3524ffc23aa5", 00:26:40.391 "is_configured": true, 00:26:40.391 "data_offset": 256, 00:26:40.391 "data_size": 7936 00:26:40.391 }, 00:26:40.391 { 00:26:40.391 "name": "BaseBdev2", 00:26:40.391 "uuid": "b30f3cfa-f8b8-5e91-b6e0-e67cf2a8122c", 00:26:40.391 "is_configured": true, 00:26:40.391 "data_offset": 256, 00:26:40.391 "data_size": 7936 00:26:40.391 } 00:26:40.391 ] 00:26:40.391 }' 00:26:40.391 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:40.391 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:40.391 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:40.391 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:40.391 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:40.391 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:40.391 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:40.391 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:40.391 13:26:20 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:40.391 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:40.391 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:40.391 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:40.391 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:40.391 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:40.391 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:40.391 13:26:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:40.651 13:26:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:40.651 "name": "raid_bdev1", 00:26:40.651 "uuid": "7fe1ee7c-9384-4d3b-8dfc-e48f0f8e17a1", 00:26:40.651 "strip_size_kb": 0, 00:26:40.651 "state": "online", 00:26:40.651 "raid_level": "raid1", 00:26:40.651 "superblock": true, 00:26:40.651 "num_base_bdevs": 2, 00:26:40.651 "num_base_bdevs_discovered": 2, 00:26:40.651 "num_base_bdevs_operational": 2, 00:26:40.651 "base_bdevs_list": [ 00:26:40.651 { 00:26:40.651 "name": "spare", 00:26:40.651 "uuid": "d9213f42-6523-5f8b-8efe-3524ffc23aa5", 00:26:40.651 "is_configured": true, 00:26:40.651 "data_offset": 256, 00:26:40.651 "data_size": 7936 00:26:40.651 }, 00:26:40.651 { 00:26:40.651 "name": "BaseBdev2", 00:26:40.651 "uuid": "b30f3cfa-f8b8-5e91-b6e0-e67cf2a8122c", 00:26:40.651 "is_configured": true, 00:26:40.651 "data_offset": 256, 00:26:40.651 "data_size": 7936 00:26:40.651 } 00:26:40.651 ] 00:26:40.651 }' 00:26:40.651 13:26:21 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:40.651 13:26:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:41.221 13:26:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:41.537 [2024-07-26 13:26:21.787602] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:41.537 [2024-07-26 13:26:21.787628] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:41.537 [2024-07-26 13:26:21.787687] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:41.537 [2024-07-26 13:26:21.787742] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:41.537 [2024-07-26 13:26:21.787753] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe47270 name raid_bdev1, state offline 00:26:41.537 13:26:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:41.537 13:26:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@735 -- # jq length 00:26:41.537 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:26:41.537 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:26:41.537 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:26:41.537 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:41.537 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:41.537 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:26:41.537 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:41.537 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:41.537 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:41.537 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:26:41.537 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:41.537 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:41.537 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:41.797 /dev/nbd0 00:26:41.797 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:41.797 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:41.797 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:41.797 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:26:41.797 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:41.797 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:41.797 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:41.797 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:26:41.797 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:41.797 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:41.797 13:26:22 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:41.797 1+0 records in 00:26:41.797 1+0 records out 00:26:41.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023886 s, 17.1 MB/s 00:26:41.797 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:41.797 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:26:41.797 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:41.797 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:41.797 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:26:41.797 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:41.797 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:41.797 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:42.057 /dev/nbd1 00:26:42.057 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:42.057 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:42.057 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:26:42.057 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:26:42.057 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:42.057 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:42.057 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:26:42.057 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:26:42.057 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:42.057 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:42.057 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:42.057 1+0 records in 00:26:42.057 1+0 records out 00:26:42.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354495 s, 11.6 MB/s 00:26:42.057 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:42.057 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:26:42.057 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:26:42.057 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:42.057 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:26:42.057 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:42.057 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:42.057 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:42.057 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:42.057 13:26:22 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:42.057 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:42.057 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:42.057 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:26:42.057 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:42.057 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:42.316 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:42.316 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:42.316 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:42.316 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:42.316 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:42.316 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:42.316 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:26:42.316 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:26:42.316 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:42.316 13:26:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:42.576 13:26:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:42.576 13:26:23 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:42.576 13:26:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:42.576 13:26:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:42.576 13:26:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:42.576 13:26:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:42.576 13:26:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:26:42.576 13:26:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:26:42.576 13:26:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:26:42.576 13:26:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:42.835 13:26:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:43.095 [2024-07-26 13:26:23.394262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:43.095 [2024-07-26 13:26:23.394307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:43.095 [2024-07-26 13:26:23.394326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe480a0 00:26:43.095 [2024-07-26 13:26:23.394337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:43.095 [2024-07-26 13:26:23.395862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:43.095 [2024-07-26 13:26:23.395891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:43.095 [2024-07-26 13:26:23.395967] 
bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:43.095 [2024-07-26 13:26:23.395993] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:43.095 [2024-07-26 13:26:23.396087] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:43.095 spare 00:26:43.095 13:26:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:43.095 13:26:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:43.095 13:26:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:43.095 13:26:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:43.095 13:26:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:43.095 13:26:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:43.095 13:26:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:43.095 13:26:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:43.095 13:26:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:43.095 13:26:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:43.095 13:26:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.095 13:26:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:43.095 [2024-07-26 13:26:23.496406] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xe485c0 00:26:43.095 [2024-07-26 13:26:23.496419] 
bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:26:43.095 [2024-07-26 13:26:23.496604] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfe7fe0 00:26:43.095 [2024-07-26 13:26:23.496737] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe485c0 00:26:43.095 [2024-07-26 13:26:23.496746] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe485c0 00:26:43.095 [2024-07-26 13:26:23.496842] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:43.354 13:26:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:43.354 "name": "raid_bdev1", 00:26:43.354 "uuid": "7fe1ee7c-9384-4d3b-8dfc-e48f0f8e17a1", 00:26:43.354 "strip_size_kb": 0, 00:26:43.354 "state": "online", 00:26:43.354 "raid_level": "raid1", 00:26:43.354 "superblock": true, 00:26:43.354 "num_base_bdevs": 2, 00:26:43.354 "num_base_bdevs_discovered": 2, 00:26:43.354 "num_base_bdevs_operational": 2, 00:26:43.354 "base_bdevs_list": [ 00:26:43.354 { 00:26:43.354 "name": "spare", 00:26:43.354 "uuid": "d9213f42-6523-5f8b-8efe-3524ffc23aa5", 00:26:43.354 "is_configured": true, 00:26:43.354 "data_offset": 256, 00:26:43.354 "data_size": 7936 00:26:43.354 }, 00:26:43.354 { 00:26:43.354 "name": "BaseBdev2", 00:26:43.354 "uuid": "b30f3cfa-f8b8-5e91-b6e0-e67cf2a8122c", 00:26:43.354 "is_configured": true, 00:26:43.354 "data_offset": 256, 00:26:43.354 "data_size": 7936 00:26:43.354 } 00:26:43.354 ] 00:26:43.354 }' 00:26:43.354 13:26:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:43.354 13:26:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:43.922 13:26:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:43.922 13:26:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:26:43.922 13:26:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:43.922 13:26:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:43.922 13:26:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:43.922 13:26:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.922 13:26:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:43.922 13:26:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:43.922 "name": "raid_bdev1", 00:26:43.922 "uuid": "7fe1ee7c-9384-4d3b-8dfc-e48f0f8e17a1", 00:26:43.922 "strip_size_kb": 0, 00:26:43.922 "state": "online", 00:26:43.922 "raid_level": "raid1", 00:26:43.922 "superblock": true, 00:26:43.922 "num_base_bdevs": 2, 00:26:43.922 "num_base_bdevs_discovered": 2, 00:26:43.922 "num_base_bdevs_operational": 2, 00:26:43.922 "base_bdevs_list": [ 00:26:43.922 { 00:26:43.922 "name": "spare", 00:26:43.922 "uuid": "d9213f42-6523-5f8b-8efe-3524ffc23aa5", 00:26:43.922 "is_configured": true, 00:26:43.922 "data_offset": 256, 00:26:43.922 "data_size": 7936 00:26:43.922 }, 00:26:43.922 { 00:26:43.922 "name": "BaseBdev2", 00:26:43.922 "uuid": "b30f3cfa-f8b8-5e91-b6e0-e67cf2a8122c", 00:26:43.922 "is_configured": true, 00:26:43.922 "data_offset": 256, 00:26:43.922 "data_size": 7936 00:26:43.922 } 00:26:43.922 ] 00:26:43.922 }' 00:26:43.922 13:26:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:44.181 13:26:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:44.181 13:26:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 
00:26:44.181 13:26:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:44.181 13:26:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:44.181 13:26:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:44.181 13:26:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:26:44.181 13:26:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:44.441 [2024-07-26 13:26:24.862235] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:44.441 13:26:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:44.441 13:26:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:44.441 13:26:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:44.441 13:26:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:44.441 13:26:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:44.441 13:26:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:44.441 13:26:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:44.441 13:26:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:44.441 13:26:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:44.441 13:26:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:44.441 
13:26:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:44.441 13:26:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:44.700 13:26:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:44.700 "name": "raid_bdev1", 00:26:44.700 "uuid": "7fe1ee7c-9384-4d3b-8dfc-e48f0f8e17a1", 00:26:44.700 "strip_size_kb": 0, 00:26:44.700 "state": "online", 00:26:44.700 "raid_level": "raid1", 00:26:44.700 "superblock": true, 00:26:44.700 "num_base_bdevs": 2, 00:26:44.700 "num_base_bdevs_discovered": 1, 00:26:44.700 "num_base_bdevs_operational": 1, 00:26:44.700 "base_bdevs_list": [ 00:26:44.701 { 00:26:44.701 "name": null, 00:26:44.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:44.701 "is_configured": false, 00:26:44.701 "data_offset": 256, 00:26:44.701 "data_size": 7936 00:26:44.701 }, 00:26:44.701 { 00:26:44.701 "name": "BaseBdev2", 00:26:44.701 "uuid": "b30f3cfa-f8b8-5e91-b6e0-e67cf2a8122c", 00:26:44.701 "is_configured": true, 00:26:44.701 "data_offset": 256, 00:26:44.701 "data_size": 7936 00:26:44.701 } 00:26:44.701 ] 00:26:44.701 }' 00:26:44.701 13:26:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:44.701 13:26:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:45.270 13:26:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:45.530 [2024-07-26 13:26:25.900978] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:45.530 [2024-07-26 13:26:25.901120] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:26:45.530 [2024-07-26 13:26:25.901136] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:26:45.530 [2024-07-26 13:26:25.901169] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:45.530 [2024-07-26 13:26:25.905821] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe48d80 00:26:45.530 [2024-07-26 13:26:25.907880] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:45.530 13:26:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # sleep 1 00:26:46.469 13:26:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:46.469 13:26:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:46.469 13:26:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:46.469 13:26:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:46.469 13:26:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:46.469 13:26:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:46.469 13:26:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:46.728 13:26:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:46.728 "name": "raid_bdev1", 00:26:46.728 "uuid": "7fe1ee7c-9384-4d3b-8dfc-e48f0f8e17a1", 00:26:46.728 "strip_size_kb": 0, 00:26:46.728 "state": "online", 00:26:46.728 "raid_level": "raid1", 00:26:46.729 "superblock": true, 00:26:46.729 "num_base_bdevs": 2, 00:26:46.729 "num_base_bdevs_discovered": 2, 00:26:46.729 "num_base_bdevs_operational": 2, 
00:26:46.729 "process": { 00:26:46.729 "type": "rebuild", 00:26:46.729 "target": "spare", 00:26:46.729 "progress": { 00:26:46.729 "blocks": 3072, 00:26:46.729 "percent": 38 00:26:46.729 } 00:26:46.729 }, 00:26:46.729 "base_bdevs_list": [ 00:26:46.729 { 00:26:46.729 "name": "spare", 00:26:46.729 "uuid": "d9213f42-6523-5f8b-8efe-3524ffc23aa5", 00:26:46.729 "is_configured": true, 00:26:46.729 "data_offset": 256, 00:26:46.729 "data_size": 7936 00:26:46.729 }, 00:26:46.729 { 00:26:46.729 "name": "BaseBdev2", 00:26:46.729 "uuid": "b30f3cfa-f8b8-5e91-b6e0-e67cf2a8122c", 00:26:46.729 "is_configured": true, 00:26:46.729 "data_offset": 256, 00:26:46.729 "data_size": 7936 00:26:46.729 } 00:26:46.729 ] 00:26:46.729 }' 00:26:46.729 13:26:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:46.729 13:26:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:46.729 13:26:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:46.729 13:26:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:46.729 13:26:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:46.988 [2024-07-26 13:26:27.454687] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:47.248 [2024-07-26 13:26:27.519674] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:47.248 [2024-07-26 13:26:27.519716] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:47.248 [2024-07-26 13:26:27.519731] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:47.248 [2024-07-26 13:26:27.519738] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target 
bdev: No such device 00:26:47.248 13:26:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:47.248 13:26:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:47.248 13:26:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:47.248 13:26:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:47.248 13:26:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:47.248 13:26:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:47.248 13:26:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:47.248 13:26:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:47.248 13:26:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:47.248 13:26:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:47.248 13:26:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:47.248 13:26:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:47.507 13:26:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:47.507 "name": "raid_bdev1", 00:26:47.507 "uuid": "7fe1ee7c-9384-4d3b-8dfc-e48f0f8e17a1", 00:26:47.507 "strip_size_kb": 0, 00:26:47.507 "state": "online", 00:26:47.507 "raid_level": "raid1", 00:26:47.507 "superblock": true, 00:26:47.507 "num_base_bdevs": 2, 00:26:47.507 "num_base_bdevs_discovered": 1, 00:26:47.507 "num_base_bdevs_operational": 1, 00:26:47.507 "base_bdevs_list": [ 00:26:47.507 { 
00:26:47.507 "name": null, 00:26:47.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:47.507 "is_configured": false, 00:26:47.507 "data_offset": 256, 00:26:47.507 "data_size": 7936 00:26:47.507 }, 00:26:47.507 { 00:26:47.507 "name": "BaseBdev2", 00:26:47.507 "uuid": "b30f3cfa-f8b8-5e91-b6e0-e67cf2a8122c", 00:26:47.507 "is_configured": true, 00:26:47.507 "data_offset": 256, 00:26:47.507 "data_size": 7936 00:26:47.507 } 00:26:47.507 ] 00:26:47.507 }' 00:26:47.507 13:26:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:47.507 13:26:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:48.076 13:26:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:48.077 [2024-07-26 13:26:28.562623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:48.077 [2024-07-26 13:26:28.562673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:48.077 [2024-07-26 13:26:28.562696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe4e140 00:26:48.077 [2024-07-26 13:26:28.562707] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:48.077 [2024-07-26 13:26:28.563063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:48.077 [2024-07-26 13:26:28.563080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:48.077 [2024-07-26 13:26:28.563164] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:26:48.077 [2024-07-26 13:26:28.563176] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:26:48.077 [2024-07-26 13:26:28.563186] bdev_raid.c:3738:raid_bdev_examine_sb: 
*NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:26:48.077 [2024-07-26 13:26:28.563205] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:48.077 [2024-07-26 13:26:28.567834] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe48d80 00:26:48.077 spare 00:26:48.077 [2024-07-26 13:26:28.569188] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:48.077 13:26:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # sleep 1 00:26:49.458 13:26:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:49.458 13:26:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:49.458 13:26:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:49.458 13:26:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:49.458 13:26:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:49.459 13:26:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:49.459 13:26:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:49.459 13:26:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:49.459 "name": "raid_bdev1", 00:26:49.459 "uuid": "7fe1ee7c-9384-4d3b-8dfc-e48f0f8e17a1", 00:26:49.459 "strip_size_kb": 0, 00:26:49.459 "state": "online", 00:26:49.459 "raid_level": "raid1", 00:26:49.459 "superblock": true, 00:26:49.459 "num_base_bdevs": 2, 00:26:49.459 "num_base_bdevs_discovered": 2, 00:26:49.459 "num_base_bdevs_operational": 2, 00:26:49.459 "process": { 00:26:49.459 "type": "rebuild", 00:26:49.459 "target": 
"spare", 00:26:49.459 "progress": { 00:26:49.459 "blocks": 3072, 00:26:49.459 "percent": 38 00:26:49.459 } 00:26:49.459 }, 00:26:49.459 "base_bdevs_list": [ 00:26:49.459 { 00:26:49.459 "name": "spare", 00:26:49.459 "uuid": "d9213f42-6523-5f8b-8efe-3524ffc23aa5", 00:26:49.459 "is_configured": true, 00:26:49.459 "data_offset": 256, 00:26:49.459 "data_size": 7936 00:26:49.459 }, 00:26:49.459 { 00:26:49.459 "name": "BaseBdev2", 00:26:49.459 "uuid": "b30f3cfa-f8b8-5e91-b6e0-e67cf2a8122c", 00:26:49.459 "is_configured": true, 00:26:49.459 "data_offset": 256, 00:26:49.459 "data_size": 7936 00:26:49.459 } 00:26:49.459 ] 00:26:49.459 }' 00:26:49.459 13:26:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:49.459 13:26:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:49.459 13:26:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:49.459 13:26:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:49.459 13:26:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:49.719 [2024-07-26 13:26:30.124396] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:49.719 [2024-07-26 13:26:30.180930] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:49.719 [2024-07-26 13:26:30.180972] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:49.719 [2024-07-26 13:26:30.180987] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:49.719 [2024-07-26 13:26:30.180994] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:49.719 13:26:30 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:49.719 13:26:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:49.719 13:26:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:49.719 13:26:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:49.719 13:26:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:49.719 13:26:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:49.719 13:26:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:49.719 13:26:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:49.719 13:26:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:49.719 13:26:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:49.719 13:26:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:49.719 13:26:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:49.979 13:26:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:49.979 "name": "raid_bdev1", 00:26:49.979 "uuid": "7fe1ee7c-9384-4d3b-8dfc-e48f0f8e17a1", 00:26:49.979 "strip_size_kb": 0, 00:26:49.979 "state": "online", 00:26:49.979 "raid_level": "raid1", 00:26:49.979 "superblock": true, 00:26:49.979 "num_base_bdevs": 2, 00:26:49.979 "num_base_bdevs_discovered": 1, 00:26:49.979 "num_base_bdevs_operational": 1, 00:26:49.979 "base_bdevs_list": [ 00:26:49.979 { 00:26:49.979 "name": null, 00:26:49.979 "uuid": "00000000-0000-0000-0000-000000000000", 
00:26:49.979 "is_configured": false, 00:26:49.979 "data_offset": 256, 00:26:49.979 "data_size": 7936 00:26:49.979 }, 00:26:49.979 { 00:26:49.979 "name": "BaseBdev2", 00:26:49.979 "uuid": "b30f3cfa-f8b8-5e91-b6e0-e67cf2a8122c", 00:26:49.979 "is_configured": true, 00:26:49.979 "data_offset": 256, 00:26:49.979 "data_size": 7936 00:26:49.979 } 00:26:49.979 ] 00:26:49.979 }' 00:26:49.979 13:26:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:49.979 13:26:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:50.548 13:26:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:50.548 13:26:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:50.548 13:26:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:50.548 13:26:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:50.548 13:26:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:50.548 13:26:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.548 13:26:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:50.807 13:26:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:50.807 "name": "raid_bdev1", 00:26:50.807 "uuid": "7fe1ee7c-9384-4d3b-8dfc-e48f0f8e17a1", 00:26:50.807 "strip_size_kb": 0, 00:26:50.807 "state": "online", 00:26:50.807 "raid_level": "raid1", 00:26:50.807 "superblock": true, 00:26:50.807 "num_base_bdevs": 2, 00:26:50.807 "num_base_bdevs_discovered": 1, 00:26:50.807 "num_base_bdevs_operational": 1, 00:26:50.807 "base_bdevs_list": [ 00:26:50.807 { 00:26:50.807 
"name": null, 00:26:50.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:50.807 "is_configured": false, 00:26:50.807 "data_offset": 256, 00:26:50.807 "data_size": 7936 00:26:50.807 }, 00:26:50.807 { 00:26:50.807 "name": "BaseBdev2", 00:26:50.807 "uuid": "b30f3cfa-f8b8-5e91-b6e0-e67cf2a8122c", 00:26:50.807 "is_configured": true, 00:26:50.807 "data_offset": 256, 00:26:50.807 "data_size": 7936 00:26:50.807 } 00:26:50.807 ] 00:26:50.807 }' 00:26:50.807 13:26:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:50.807 13:26:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:50.807 13:26:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:50.807 13:26:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:50.807 13:26:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:26:51.067 13:26:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:51.326 [2024-07-26 13:26:31.745417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:51.326 [2024-07-26 13:26:31.745461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:51.326 [2024-07-26 13:26:31.745481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe49ba0 00:26:51.326 [2024-07-26 13:26:31.745493] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:51.326 [2024-07-26 13:26:31.745818] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:51.326 [2024-07-26 13:26:31.745835] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:51.326 [2024-07-26 13:26:31.745894] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:51.326 [2024-07-26 13:26:31.745906] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:51.326 [2024-07-26 13:26:31.745916] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:51.326 BaseBdev1 00:26:51.326 13:26:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@789 -- # sleep 1 00:26:52.263 13:26:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:52.263 13:26:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:52.263 13:26:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:52.263 13:26:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:52.263 13:26:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:52.263 13:26:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:52.263 13:26:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:52.263 13:26:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:52.264 13:26:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:52.264 13:26:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:52.264 13:26:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:26:52.264 13:26:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.523 13:26:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:52.523 "name": "raid_bdev1", 00:26:52.523 "uuid": "7fe1ee7c-9384-4d3b-8dfc-e48f0f8e17a1", 00:26:52.523 "strip_size_kb": 0, 00:26:52.523 "state": "online", 00:26:52.523 "raid_level": "raid1", 00:26:52.523 "superblock": true, 00:26:52.523 "num_base_bdevs": 2, 00:26:52.523 "num_base_bdevs_discovered": 1, 00:26:52.523 "num_base_bdevs_operational": 1, 00:26:52.523 "base_bdevs_list": [ 00:26:52.523 { 00:26:52.523 "name": null, 00:26:52.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:52.523 "is_configured": false, 00:26:52.523 "data_offset": 256, 00:26:52.523 "data_size": 7936 00:26:52.523 }, 00:26:52.523 { 00:26:52.523 "name": "BaseBdev2", 00:26:52.523 "uuid": "b30f3cfa-f8b8-5e91-b6e0-e67cf2a8122c", 00:26:52.523 "is_configured": true, 00:26:52.523 "data_offset": 256, 00:26:52.523 "data_size": 7936 00:26:52.523 } 00:26:52.523 ] 00:26:52.523 }' 00:26:52.523 13:26:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:52.523 13:26:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:53.093 13:26:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:53.093 13:26:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:53.093 13:26:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:53.093 13:26:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:53.093 13:26:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:53.093 13:26:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:53.093 13:26:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:53.352 13:26:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:53.352 "name": "raid_bdev1", 00:26:53.352 "uuid": "7fe1ee7c-9384-4d3b-8dfc-e48f0f8e17a1", 00:26:53.352 "strip_size_kb": 0, 00:26:53.352 "state": "online", 00:26:53.352 "raid_level": "raid1", 00:26:53.352 "superblock": true, 00:26:53.352 "num_base_bdevs": 2, 00:26:53.352 "num_base_bdevs_discovered": 1, 00:26:53.352 "num_base_bdevs_operational": 1, 00:26:53.352 "base_bdevs_list": [ 00:26:53.352 { 00:26:53.352 "name": null, 00:26:53.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:53.352 "is_configured": false, 00:26:53.352 "data_offset": 256, 00:26:53.352 "data_size": 7936 00:26:53.352 }, 00:26:53.352 { 00:26:53.352 "name": "BaseBdev2", 00:26:53.352 "uuid": "b30f3cfa-f8b8-5e91-b6e0-e67cf2a8122c", 00:26:53.352 "is_configured": true, 00:26:53.352 "data_offset": 256, 00:26:53.352 "data_size": 7936 00:26:53.352 } 00:26:53.352 ] 00:26:53.352 }' 00:26:53.352 13:26:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:53.352 13:26:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:53.352 13:26:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:53.611 13:26:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:53.611 13:26:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:53.611 13:26:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 
00:26:53.611 13:26:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:53.611 13:26:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:53.611 13:26:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:53.611 13:26:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:53.611 13:26:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:53.611 13:26:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:53.611 13:26:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:53.611 13:26:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:53.611 13:26:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:26:53.611 13:26:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:26:53.611 [2024-07-26 13:26:34.099705] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:53.611 [2024-07-26 13:26:34.099828] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:26:53.611 [2024-07-26 
13:26:34.099842] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:26:53.611 request: 00:26:53.611 { 00:26:53.611 "base_bdev": "BaseBdev1", 00:26:53.611 "raid_bdev": "raid_bdev1", 00:26:53.611 "method": "bdev_raid_add_base_bdev", 00:26:53.611 "req_id": 1 00:26:53.611 } 00:26:53.611 Got JSON-RPC error response 00:26:53.611 response: 00:26:53.611 { 00:26:53.611 "code": -22, 00:26:53.611 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:26:53.611 } 00:26:53.611 13:26:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:26:53.611 13:26:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:53.611 13:26:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:53.611 13:26:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:53.611 13:26:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@793 -- # sleep 1 00:26:54.991 13:26:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:54.991 13:26:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:54.991 13:26:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:54.991 13:26:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:54.991 13:26:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:54.991 13:26:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:54.991 13:26:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:54.991 13:26:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:54.991 13:26:35 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:54.991 13:26:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:54.991 13:26:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.991 13:26:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:54.991 13:26:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:54.991 "name": "raid_bdev1", 00:26:54.991 "uuid": "7fe1ee7c-9384-4d3b-8dfc-e48f0f8e17a1", 00:26:54.991 "strip_size_kb": 0, 00:26:54.991 "state": "online", 00:26:54.991 "raid_level": "raid1", 00:26:54.991 "superblock": true, 00:26:54.991 "num_base_bdevs": 2, 00:26:54.991 "num_base_bdevs_discovered": 1, 00:26:54.991 "num_base_bdevs_operational": 1, 00:26:54.991 "base_bdevs_list": [ 00:26:54.991 { 00:26:54.991 "name": null, 00:26:54.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:54.991 "is_configured": false, 00:26:54.991 "data_offset": 256, 00:26:54.991 "data_size": 7936 00:26:54.991 }, 00:26:54.991 { 00:26:54.991 "name": "BaseBdev2", 00:26:54.991 "uuid": "b30f3cfa-f8b8-5e91-b6e0-e67cf2a8122c", 00:26:54.991 "is_configured": true, 00:26:54.991 "data_offset": 256, 00:26:54.991 "data_size": 7936 00:26:54.991 } 00:26:54.991 ] 00:26:54.991 }' 00:26:54.991 13:26:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:54.991 13:26:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:55.600 13:26:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:55.601 13:26:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:55.601 13:26:35 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:55.601 13:26:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:55.601 13:26:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:55.601 13:26:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.601 13:26:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:55.860 13:26:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:55.860 "name": "raid_bdev1", 00:26:55.860 "uuid": "7fe1ee7c-9384-4d3b-8dfc-e48f0f8e17a1", 00:26:55.860 "strip_size_kb": 0, 00:26:55.860 "state": "online", 00:26:55.860 "raid_level": "raid1", 00:26:55.860 "superblock": true, 00:26:55.860 "num_base_bdevs": 2, 00:26:55.860 "num_base_bdevs_discovered": 1, 00:26:55.860 "num_base_bdevs_operational": 1, 00:26:55.860 "base_bdevs_list": [ 00:26:55.860 { 00:26:55.860 "name": null, 00:26:55.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:55.860 "is_configured": false, 00:26:55.860 "data_offset": 256, 00:26:55.860 "data_size": 7936 00:26:55.860 }, 00:26:55.860 { 00:26:55.860 "name": "BaseBdev2", 00:26:55.860 "uuid": "b30f3cfa-f8b8-5e91-b6e0-e67cf2a8122c", 00:26:55.860 "is_configured": true, 00:26:55.860 "data_offset": 256, 00:26:55.860 "data_size": 7936 00:26:55.860 } 00:26:55.860 ] 00:26:55.860 }' 00:26:55.860 13:26:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:55.860 13:26:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:55.860 13:26:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:55.860 13:26:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ 
none == \n\o\n\e ]] 00:26:55.860 13:26:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@798 -- # killprocess 824188 00:26:55.860 13:26:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 824188 ']' 00:26:55.860 13:26:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 824188 00:26:55.860 13:26:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:26:55.860 13:26:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:55.860 13:26:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 824188 00:26:55.860 13:26:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:55.860 13:26:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:55.860 13:26:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 824188' 00:26:55.860 killing process with pid 824188 00:26:55.860 13:26:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 824188 00:26:55.860 Received shutdown signal, test time was about 60.000000 seconds 00:26:55.860 00:26:55.860 Latency(us) 00:26:55.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:55.860 =================================================================================================================== 00:26:55.860 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:55.860 [2024-07-26 13:26:36.312144] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:55.860 [2024-07-26 13:26:36.312234] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:55.860 [2024-07-26 13:26:36.312275] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:26:55.860 [2024-07-26 13:26:36.312286] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe485c0 name raid_bdev1, state offline 00:26:55.860 13:26:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 824188 00:26:55.860 [2024-07-26 13:26:36.336046] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:56.120 13:26:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@800 -- # return 0 00:26:56.120 00:26:56.120 real 0m29.843s 00:26:56.120 user 0m46.192s 00:26:56.120 sys 0m4.789s 00:26:56.120 13:26:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:56.120 13:26:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:26:56.120 ************************************ 00:26:56.120 END TEST raid_rebuild_test_sb_4k 00:26:56.120 ************************************ 00:26:56.120 13:26:36 bdev_raid -- bdev/bdev_raid.sh@984 -- # base_malloc_params='-m 32' 00:26:56.120 13:26:36 bdev_raid -- bdev/bdev_raid.sh@985 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:26:56.120 13:26:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:26:56.120 13:26:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:56.120 13:26:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:56.120 ************************************ 00:26:56.120 START TEST raid_state_function_test_sb_md_separate 00:26:56.120 ************************************ 00:26:56.120 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:26:56.120 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:26:56.121 13:26:36 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:26:56.121 13:26:36 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=829679 00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 829679' 00:26:56.121 Process raid pid: 829679 00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 829679 /var/tmp/spdk-raid.sock 00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 829679 ']' 00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:56.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:56.121 13:26:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:56.381 [2024-07-26 13:26:36.681207] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:26:56.381 [2024-07-26 13:26:36.681265] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 
EAL: Requested device 0000:3d:02.1 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3d:02.3 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 
0000:3f:01.7 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:56.381 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:56.381 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:56.381 [2024-07-26 13:26:36.815986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.381 [2024-07-26 13:26:36.901948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.640 [2024-07-26 13:26:36.962550] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:56.640 [2024-07-26 13:26:36.962583] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:57.208 13:26:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:57.208 13:26:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:26:57.208 13:26:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:57.467 [2024-07-26 13:26:37.788800] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:57.467 [2024-07-26 13:26:37.788839] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:57.467 [2024-07-26 13:26:37.788849] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:57.467 [2024-07-26 13:26:37.788860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:57.467 13:26:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:57.467 13:26:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:57.467 13:26:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:57.468 13:26:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:57.468 13:26:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:57.468 13:26:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:57.468 13:26:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:57.468 13:26:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:57.468 13:26:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:57.468 13:26:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 
00:26:57.468 13:26:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:57.468 13:26:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:57.727 13:26:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:57.727 "name": "Existed_Raid", 00:26:57.727 "uuid": "b4226a3f-2e3b-4279-ac54-76a060c4f4cf", 00:26:57.727 "strip_size_kb": 0, 00:26:57.727 "state": "configuring", 00:26:57.727 "raid_level": "raid1", 00:26:57.727 "superblock": true, 00:26:57.727 "num_base_bdevs": 2, 00:26:57.727 "num_base_bdevs_discovered": 0, 00:26:57.727 "num_base_bdevs_operational": 2, 00:26:57.727 "base_bdevs_list": [ 00:26:57.727 { 00:26:57.727 "name": "BaseBdev1", 00:26:57.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:57.727 "is_configured": false, 00:26:57.727 "data_offset": 0, 00:26:57.727 "data_size": 0 00:26:57.727 }, 00:26:57.727 { 00:26:57.727 "name": "BaseBdev2", 00:26:57.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:57.727 "is_configured": false, 00:26:57.727 "data_offset": 0, 00:26:57.727 "data_size": 0 00:26:57.727 } 00:26:57.727 ] 00:26:57.727 }' 00:26:57.727 13:26:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:57.727 13:26:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:26:58.296 13:26:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:58.296 [2024-07-26 13:26:38.767267] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:58.296 [2024-07-26 13:26:38.767296] bdev_raid.c: 
378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2591f20 name Existed_Raid, state configuring 00:26:58.296 13:26:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:26:58.555 [2024-07-26 13:26:38.987859] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:58.555 [2024-07-26 13:26:38.987890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:58.555 [2024-07-26 13:26:38.987899] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:58.555 [2024-07-26 13:26:38.987911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:58.555 13:26:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:26:58.814 [2024-07-26 13:26:39.226329] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:58.814 BaseBdev1 00:26:58.814 13:26:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:26:58.814 13:26:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:26:58.814 13:26:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:58.814 13:26:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:26:58.814 13:26:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:58.814 13:26:39 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:58.814 13:26:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:59.074 13:26:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:59.333 [ 00:26:59.333 { 00:26:59.333 "name": "BaseBdev1", 00:26:59.333 "aliases": [ 00:26:59.333 "2fd0e42b-146a-4339-9cc7-cc62cb41d5c3" 00:26:59.333 ], 00:26:59.333 "product_name": "Malloc disk", 00:26:59.333 "block_size": 4096, 00:26:59.333 "num_blocks": 8192, 00:26:59.333 "uuid": "2fd0e42b-146a-4339-9cc7-cc62cb41d5c3", 00:26:59.333 "md_size": 32, 00:26:59.333 "md_interleave": false, 00:26:59.333 "dif_type": 0, 00:26:59.333 "assigned_rate_limits": { 00:26:59.333 "rw_ios_per_sec": 0, 00:26:59.333 "rw_mbytes_per_sec": 0, 00:26:59.333 "r_mbytes_per_sec": 0, 00:26:59.333 "w_mbytes_per_sec": 0 00:26:59.333 }, 00:26:59.333 "claimed": true, 00:26:59.333 "claim_type": "exclusive_write", 00:26:59.333 "zoned": false, 00:26:59.333 "supported_io_types": { 00:26:59.333 "read": true, 00:26:59.333 "write": true, 00:26:59.333 "unmap": true, 00:26:59.333 "flush": true, 00:26:59.333 "reset": true, 00:26:59.333 "nvme_admin": false, 00:26:59.333 "nvme_io": false, 00:26:59.333 "nvme_io_md": false, 00:26:59.333 "write_zeroes": true, 00:26:59.333 "zcopy": true, 00:26:59.333 "get_zone_info": false, 00:26:59.333 "zone_management": false, 00:26:59.333 "zone_append": false, 00:26:59.333 "compare": false, 00:26:59.333 "compare_and_write": false, 00:26:59.333 "abort": true, 00:26:59.333 "seek_hole": false, 00:26:59.333 "seek_data": false, 00:26:59.333 "copy": true, 00:26:59.333 "nvme_iov_md": false 00:26:59.333 }, 00:26:59.333 "memory_domains": [ 00:26:59.333 { 00:26:59.333 
"dma_device_id": "system", 00:26:59.333 "dma_device_type": 1 00:26:59.333 }, 00:26:59.333 { 00:26:59.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:59.333 "dma_device_type": 2 00:26:59.333 } 00:26:59.333 ], 00:26:59.333 "driver_specific": {} 00:26:59.333 } 00:26:59.333 ] 00:26:59.333 13:26:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:26:59.333 13:26:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:59.333 13:26:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:59.333 13:26:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:59.333 13:26:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:59.333 13:26:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:59.334 13:26:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:59.334 13:26:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:59.334 13:26:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:59.334 13:26:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:59.334 13:26:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:59.334 13:26:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:59.334 13:26:39 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:59.593 13:26:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:59.593 "name": "Existed_Raid", 00:26:59.593 "uuid": "44bf5485-c634-4425-9125-48c3f26dfa67", 00:26:59.593 "strip_size_kb": 0, 00:26:59.593 "state": "configuring", 00:26:59.593 "raid_level": "raid1", 00:26:59.593 "superblock": true, 00:26:59.593 "num_base_bdevs": 2, 00:26:59.593 "num_base_bdevs_discovered": 1, 00:26:59.593 "num_base_bdevs_operational": 2, 00:26:59.593 "base_bdevs_list": [ 00:26:59.593 { 00:26:59.593 "name": "BaseBdev1", 00:26:59.593 "uuid": "2fd0e42b-146a-4339-9cc7-cc62cb41d5c3", 00:26:59.593 "is_configured": true, 00:26:59.593 "data_offset": 256, 00:26:59.593 "data_size": 7936 00:26:59.593 }, 00:26:59.593 { 00:26:59.593 "name": "BaseBdev2", 00:26:59.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:59.593 "is_configured": false, 00:26:59.593 "data_offset": 0, 00:26:59.593 "data_size": 0 00:26:59.593 } 00:26:59.593 ] 00:26:59.593 }' 00:26:59.593 13:26:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:59.593 13:26:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:00.163 13:26:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:00.429 [2024-07-26 13:26:40.734308] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:00.429 [2024-07-26 13:26:40.734347] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2591810 name Existed_Raid, state configuring 00:27:00.429 13:26:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:00.688 [2024-07-26 13:26:40.962949] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:00.688 [2024-07-26 13:26:40.964280] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:00.688 [2024-07-26 13:26:40.964311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:00.688 13:26:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:27:00.688 13:26:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:00.688 13:26:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:00.688 13:26:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:00.688 13:26:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:00.688 13:26:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:00.688 13:26:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:00.688 13:26:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:00.688 13:26:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:00.688 13:26:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:00.688 13:26:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:00.688 13:26:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local 
tmp 00:27:00.688 13:26:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:00.688 13:26:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:00.688 13:26:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:00.688 "name": "Existed_Raid", 00:27:00.688 "uuid": "5eeab2cd-1844-410b-ba8a-2d7d88792bda", 00:27:00.688 "strip_size_kb": 0, 00:27:00.688 "state": "configuring", 00:27:00.688 "raid_level": "raid1", 00:27:00.688 "superblock": true, 00:27:00.688 "num_base_bdevs": 2, 00:27:00.688 "num_base_bdevs_discovered": 1, 00:27:00.688 "num_base_bdevs_operational": 2, 00:27:00.689 "base_bdevs_list": [ 00:27:00.689 { 00:27:00.689 "name": "BaseBdev1", 00:27:00.689 "uuid": "2fd0e42b-146a-4339-9cc7-cc62cb41d5c3", 00:27:00.689 "is_configured": true, 00:27:00.689 "data_offset": 256, 00:27:00.689 "data_size": 7936 00:27:00.689 }, 00:27:00.689 { 00:27:00.689 "name": "BaseBdev2", 00:27:00.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:00.689 "is_configured": false, 00:27:00.689 "data_offset": 0, 00:27:00.689 "data_size": 0 00:27:00.689 } 00:27:00.689 ] 00:27:00.689 }' 00:27:00.689 13:26:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:00.689 13:26:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:01.257 13:26:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:27:01.516 [2024-07-26 13:26:41.981363] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:01.516 [2024-07-26 13:26:41.981492] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2593750 00:27:01.516 [2024-07-26 13:26:41.981505] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:01.516 [2024-07-26 13:26:41.981561] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2593190 00:27:01.516 [2024-07-26 13:26:41.981652] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2593750 00:27:01.516 [2024-07-26 13:26:41.981661] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2593750 00:27:01.516 [2024-07-26 13:26:41.981720] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:01.516 BaseBdev2 00:27:01.516 13:26:41 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:27:01.516 13:26:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:27:01.516 13:26:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:01.516 13:26:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:27:01.516 13:26:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:01.516 13:26:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:01.516 13:26:41 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:01.774 13:26:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:02.032 [ 00:27:02.032 { 00:27:02.032 
"name": "BaseBdev2", 00:27:02.032 "aliases": [ 00:27:02.032 "a986e758-035d-42d8-8ae0-747c5671feee" 00:27:02.032 ], 00:27:02.032 "product_name": "Malloc disk", 00:27:02.032 "block_size": 4096, 00:27:02.032 "num_blocks": 8192, 00:27:02.032 "uuid": "a986e758-035d-42d8-8ae0-747c5671feee", 00:27:02.032 "md_size": 32, 00:27:02.032 "md_interleave": false, 00:27:02.032 "dif_type": 0, 00:27:02.032 "assigned_rate_limits": { 00:27:02.032 "rw_ios_per_sec": 0, 00:27:02.032 "rw_mbytes_per_sec": 0, 00:27:02.032 "r_mbytes_per_sec": 0, 00:27:02.032 "w_mbytes_per_sec": 0 00:27:02.032 }, 00:27:02.032 "claimed": true, 00:27:02.032 "claim_type": "exclusive_write", 00:27:02.032 "zoned": false, 00:27:02.032 "supported_io_types": { 00:27:02.032 "read": true, 00:27:02.032 "write": true, 00:27:02.032 "unmap": true, 00:27:02.032 "flush": true, 00:27:02.032 "reset": true, 00:27:02.032 "nvme_admin": false, 00:27:02.032 "nvme_io": false, 00:27:02.032 "nvme_io_md": false, 00:27:02.032 "write_zeroes": true, 00:27:02.032 "zcopy": true, 00:27:02.032 "get_zone_info": false, 00:27:02.032 "zone_management": false, 00:27:02.032 "zone_append": false, 00:27:02.032 "compare": false, 00:27:02.032 "compare_and_write": false, 00:27:02.032 "abort": true, 00:27:02.032 "seek_hole": false, 00:27:02.032 "seek_data": false, 00:27:02.032 "copy": true, 00:27:02.032 "nvme_iov_md": false 00:27:02.032 }, 00:27:02.032 "memory_domains": [ 00:27:02.032 { 00:27:02.032 "dma_device_id": "system", 00:27:02.032 "dma_device_type": 1 00:27:02.032 }, 00:27:02.032 { 00:27:02.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:02.032 "dma_device_type": 2 00:27:02.032 } 00:27:02.032 ], 00:27:02.032 "driver_specific": {} 00:27:02.032 } 00:27:02.032 ] 00:27:02.032 13:26:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:27:02.032 13:26:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:27:02.032 13:26:42 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:02.032 13:26:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:27:02.032 13:26:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:02.032 13:26:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:02.032 13:26:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:02.032 13:26:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:02.032 13:26:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:02.032 13:26:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:02.032 13:26:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:02.032 13:26:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:02.032 13:26:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:02.032 13:26:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:02.032 13:26:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:02.292 13:26:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:02.292 "name": "Existed_Raid", 00:27:02.292 "uuid": "5eeab2cd-1844-410b-ba8a-2d7d88792bda", 00:27:02.292 
"strip_size_kb": 0, 00:27:02.292 "state": "online", 00:27:02.292 "raid_level": "raid1", 00:27:02.292 "superblock": true, 00:27:02.292 "num_base_bdevs": 2, 00:27:02.292 "num_base_bdevs_discovered": 2, 00:27:02.292 "num_base_bdevs_operational": 2, 00:27:02.292 "base_bdevs_list": [ 00:27:02.292 { 00:27:02.292 "name": "BaseBdev1", 00:27:02.292 "uuid": "2fd0e42b-146a-4339-9cc7-cc62cb41d5c3", 00:27:02.292 "is_configured": true, 00:27:02.292 "data_offset": 256, 00:27:02.292 "data_size": 7936 00:27:02.292 }, 00:27:02.292 { 00:27:02.292 "name": "BaseBdev2", 00:27:02.292 "uuid": "a986e758-035d-42d8-8ae0-747c5671feee", 00:27:02.292 "is_configured": true, 00:27:02.292 "data_offset": 256, 00:27:02.292 "data_size": 7936 00:27:02.292 } 00:27:02.292 ] 00:27:02.292 }' 00:27:02.292 13:26:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:02.292 13:26:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:02.860 13:26:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:27:02.860 13:26:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:27:02.860 13:26:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:02.860 13:26:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:02.860 13:26:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:02.860 13:26:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:27:02.860 13:26:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:02.860 13:26:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:27:03.119 [2024-07-26 13:26:43.421452] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:03.119 13:26:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:03.119 "name": "Existed_Raid", 00:27:03.119 "aliases": [ 00:27:03.119 "5eeab2cd-1844-410b-ba8a-2d7d88792bda" 00:27:03.119 ], 00:27:03.119 "product_name": "Raid Volume", 00:27:03.119 "block_size": 4096, 00:27:03.119 "num_blocks": 7936, 00:27:03.119 "uuid": "5eeab2cd-1844-410b-ba8a-2d7d88792bda", 00:27:03.119 "md_size": 32, 00:27:03.119 "md_interleave": false, 00:27:03.119 "dif_type": 0, 00:27:03.119 "assigned_rate_limits": { 00:27:03.119 "rw_ios_per_sec": 0, 00:27:03.119 "rw_mbytes_per_sec": 0, 00:27:03.119 "r_mbytes_per_sec": 0, 00:27:03.119 "w_mbytes_per_sec": 0 00:27:03.119 }, 00:27:03.119 "claimed": false, 00:27:03.119 "zoned": false, 00:27:03.119 "supported_io_types": { 00:27:03.119 "read": true, 00:27:03.119 "write": true, 00:27:03.119 "unmap": false, 00:27:03.119 "flush": false, 00:27:03.119 "reset": true, 00:27:03.119 "nvme_admin": false, 00:27:03.119 "nvme_io": false, 00:27:03.119 "nvme_io_md": false, 00:27:03.119 "write_zeroes": true, 00:27:03.119 "zcopy": false, 00:27:03.119 "get_zone_info": false, 00:27:03.119 "zone_management": false, 00:27:03.119 "zone_append": false, 00:27:03.119 "compare": false, 00:27:03.119 "compare_and_write": false, 00:27:03.119 "abort": false, 00:27:03.119 "seek_hole": false, 00:27:03.119 "seek_data": false, 00:27:03.119 "copy": false, 00:27:03.119 "nvme_iov_md": false 00:27:03.119 }, 00:27:03.119 "memory_domains": [ 00:27:03.119 { 00:27:03.119 "dma_device_id": "system", 00:27:03.119 "dma_device_type": 1 00:27:03.119 }, 00:27:03.119 { 00:27:03.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.119 "dma_device_type": 2 00:27:03.119 }, 00:27:03.119 { 00:27:03.119 
"dma_device_id": "system", 00:27:03.119 "dma_device_type": 1 00:27:03.119 }, 00:27:03.119 { 00:27:03.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.119 "dma_device_type": 2 00:27:03.119 } 00:27:03.119 ], 00:27:03.119 "driver_specific": { 00:27:03.119 "raid": { 00:27:03.119 "uuid": "5eeab2cd-1844-410b-ba8a-2d7d88792bda", 00:27:03.119 "strip_size_kb": 0, 00:27:03.119 "state": "online", 00:27:03.119 "raid_level": "raid1", 00:27:03.119 "superblock": true, 00:27:03.119 "num_base_bdevs": 2, 00:27:03.119 "num_base_bdevs_discovered": 2, 00:27:03.119 "num_base_bdevs_operational": 2, 00:27:03.119 "base_bdevs_list": [ 00:27:03.119 { 00:27:03.119 "name": "BaseBdev1", 00:27:03.119 "uuid": "2fd0e42b-146a-4339-9cc7-cc62cb41d5c3", 00:27:03.119 "is_configured": true, 00:27:03.119 "data_offset": 256, 00:27:03.119 "data_size": 7936 00:27:03.119 }, 00:27:03.119 { 00:27:03.119 "name": "BaseBdev2", 00:27:03.119 "uuid": "a986e758-035d-42d8-8ae0-747c5671feee", 00:27:03.119 "is_configured": true, 00:27:03.119 "data_offset": 256, 00:27:03.119 "data_size": 7936 00:27:03.119 } 00:27:03.119 ] 00:27:03.119 } 00:27:03.119 } 00:27:03.119 }' 00:27:03.119 13:26:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:03.119 13:26:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:27:03.119 BaseBdev2' 00:27:03.119 13:26:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:03.119 13:26:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:27:03.119 13:26:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:03.378 13:26:43 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:03.378 "name": "BaseBdev1", 00:27:03.378 "aliases": [ 00:27:03.378 "2fd0e42b-146a-4339-9cc7-cc62cb41d5c3" 00:27:03.378 ], 00:27:03.378 "product_name": "Malloc disk", 00:27:03.378 "block_size": 4096, 00:27:03.378 "num_blocks": 8192, 00:27:03.378 "uuid": "2fd0e42b-146a-4339-9cc7-cc62cb41d5c3", 00:27:03.378 "md_size": 32, 00:27:03.378 "md_interleave": false, 00:27:03.378 "dif_type": 0, 00:27:03.378 "assigned_rate_limits": { 00:27:03.378 "rw_ios_per_sec": 0, 00:27:03.378 "rw_mbytes_per_sec": 0, 00:27:03.378 "r_mbytes_per_sec": 0, 00:27:03.378 "w_mbytes_per_sec": 0 00:27:03.378 }, 00:27:03.378 "claimed": true, 00:27:03.378 "claim_type": "exclusive_write", 00:27:03.378 "zoned": false, 00:27:03.378 "supported_io_types": { 00:27:03.378 "read": true, 00:27:03.378 "write": true, 00:27:03.378 "unmap": true, 00:27:03.378 "flush": true, 00:27:03.378 "reset": true, 00:27:03.378 "nvme_admin": false, 00:27:03.378 "nvme_io": false, 00:27:03.378 "nvme_io_md": false, 00:27:03.378 "write_zeroes": true, 00:27:03.378 "zcopy": true, 00:27:03.378 "get_zone_info": false, 00:27:03.378 "zone_management": false, 00:27:03.378 "zone_append": false, 00:27:03.378 "compare": false, 00:27:03.378 "compare_and_write": false, 00:27:03.378 "abort": true, 00:27:03.378 "seek_hole": false, 00:27:03.378 "seek_data": false, 00:27:03.378 "copy": true, 00:27:03.378 "nvme_iov_md": false 00:27:03.378 }, 00:27:03.378 "memory_domains": [ 00:27:03.378 { 00:27:03.378 "dma_device_id": "system", 00:27:03.378 "dma_device_type": 1 00:27:03.378 }, 00:27:03.378 { 00:27:03.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.378 "dma_device_type": 2 00:27:03.378 } 00:27:03.378 ], 00:27:03.378 "driver_specific": {} 00:27:03.378 }' 00:27:03.378 13:26:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:03.378 13:26:43 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:03.378 13:26:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:03.378 13:26:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:03.378 13:26:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:03.378 13:26:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:03.636 13:26:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:03.636 13:26:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:03.636 13:26:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:27:03.636 13:26:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:03.636 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:03.636 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:03.636 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:03.636 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:03.636 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:27:03.894 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:03.894 "name": "BaseBdev2", 00:27:03.894 "aliases": [ 00:27:03.894 "a986e758-035d-42d8-8ae0-747c5671feee" 00:27:03.894 ], 00:27:03.894 "product_name": "Malloc disk", 00:27:03.894 "block_size": 
4096, 00:27:03.894 "num_blocks": 8192, 00:27:03.894 "uuid": "a986e758-035d-42d8-8ae0-747c5671feee", 00:27:03.894 "md_size": 32, 00:27:03.894 "md_interleave": false, 00:27:03.894 "dif_type": 0, 00:27:03.894 "assigned_rate_limits": { 00:27:03.894 "rw_ios_per_sec": 0, 00:27:03.894 "rw_mbytes_per_sec": 0, 00:27:03.894 "r_mbytes_per_sec": 0, 00:27:03.894 "w_mbytes_per_sec": 0 00:27:03.894 }, 00:27:03.894 "claimed": true, 00:27:03.894 "claim_type": "exclusive_write", 00:27:03.894 "zoned": false, 00:27:03.894 "supported_io_types": { 00:27:03.894 "read": true, 00:27:03.894 "write": true, 00:27:03.894 "unmap": true, 00:27:03.894 "flush": true, 00:27:03.894 "reset": true, 00:27:03.894 "nvme_admin": false, 00:27:03.894 "nvme_io": false, 00:27:03.894 "nvme_io_md": false, 00:27:03.894 "write_zeroes": true, 00:27:03.894 "zcopy": true, 00:27:03.894 "get_zone_info": false, 00:27:03.894 "zone_management": false, 00:27:03.894 "zone_append": false, 00:27:03.894 "compare": false, 00:27:03.894 "compare_and_write": false, 00:27:03.894 "abort": true, 00:27:03.894 "seek_hole": false, 00:27:03.894 "seek_data": false, 00:27:03.894 "copy": true, 00:27:03.894 "nvme_iov_md": false 00:27:03.894 }, 00:27:03.894 "memory_domains": [ 00:27:03.894 { 00:27:03.894 "dma_device_id": "system", 00:27:03.894 "dma_device_type": 1 00:27:03.894 }, 00:27:03.894 { 00:27:03.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:03.894 "dma_device_type": 2 00:27:03.894 } 00:27:03.894 ], 00:27:03.894 "driver_specific": {} 00:27:03.894 }' 00:27:03.894 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:03.894 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:03.894 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:03.894 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:04.153 
13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:04.153 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:04.153 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:04.153 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:04.153 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:27:04.153 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:04.153 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:04.153 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:04.153 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:04.412 [2024-07-26 13:26:44.828936] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:04.412 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:27:04.412 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:27:04.412 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:04.412 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:27:04.412 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:27:04.412 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 
0 1 00:27:04.412 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:04.412 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:04.412 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:04.412 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:04.412 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:04.412 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:04.412 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:04.412 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:04.412 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:04.412 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:04.412 13:26:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:04.671 13:26:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:04.671 "name": "Existed_Raid", 00:27:04.671 "uuid": "5eeab2cd-1844-410b-ba8a-2d7d88792bda", 00:27:04.671 "strip_size_kb": 0, 00:27:04.671 "state": "online", 00:27:04.671 "raid_level": "raid1", 00:27:04.671 "superblock": true, 00:27:04.671 "num_base_bdevs": 2, 00:27:04.671 "num_base_bdevs_discovered": 1, 00:27:04.671 "num_base_bdevs_operational": 1, 00:27:04.671 
"base_bdevs_list": [ 00:27:04.671 { 00:27:04.671 "name": null, 00:27:04.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:04.671 "is_configured": false, 00:27:04.671 "data_offset": 256, 00:27:04.671 "data_size": 7936 00:27:04.671 }, 00:27:04.671 { 00:27:04.671 "name": "BaseBdev2", 00:27:04.671 "uuid": "a986e758-035d-42d8-8ae0-747c5671feee", 00:27:04.671 "is_configured": true, 00:27:04.671 "data_offset": 256, 00:27:04.671 "data_size": 7936 00:27:04.671 } 00:27:04.671 ] 00:27:04.671 }' 00:27:04.671 13:26:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:04.671 13:26:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:05.238 13:26:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:27:05.238 13:26:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:05.238 13:26:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:05.238 13:26:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:27:05.497 13:26:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:27:05.498 13:26:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:05.498 13:26:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:27:05.757 [2024-07-26 13:26:46.050173] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:05.757 [2024-07-26 13:26:46.050250] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:27:05.757 [2024-07-26 13:26:46.061278] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:05.757 [2024-07-26 13:26:46.061308] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:05.757 [2024-07-26 13:26:46.061319] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2593750 name Existed_Raid, state offline 00:27:05.757 13:26:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:27:05.757 13:26:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:27:05.757 13:26:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:05.757 13:26:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:27:06.077 13:26:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:27:06.077 13:26:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:27:06.077 13:26:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:27:06.077 13:26:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 829679 00:27:06.077 13:26:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 829679 ']' 00:27:06.077 13:26:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 829679 00:27:06.077 13:26:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:27:06.077 13:26:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # 
'[' Linux = Linux ']' 00:27:06.077 13:26:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 829679 00:27:06.077 13:26:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:06.077 13:26:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:06.077 13:26:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 829679' 00:27:06.077 killing process with pid 829679 00:27:06.077 13:26:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 829679 00:27:06.077 [2024-07-26 13:26:46.349808] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:06.077 13:26:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 829679 00:27:06.077 [2024-07-26 13:26:46.350646] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:06.077 13:26:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:27:06.077 00:27:06.077 real 0m9.927s 00:27:06.077 user 0m17.610s 00:27:06.077 sys 0m1.875s 00:27:06.077 13:26:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:06.077 13:26:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:06.077 ************************************ 00:27:06.077 END TEST raid_state_function_test_sb_md_separate 00:27:06.077 ************************************ 00:27:06.077 13:26:46 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:27:06.077 13:26:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:27:06.077 13:26:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:27:06.077 13:26:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:06.337 ************************************ 00:27:06.337 START TEST raid_superblock_test_md_separate 00:27:06.337 ************************************ 00:27:06.337 13:26:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:27:06.337 13:26:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:27:06.337 13:26:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:27:06.337 13:26:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:27:06.337 13:26:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:27:06.337 13:26:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:27:06.337 13:26:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:27:06.337 13:26:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:27:06.337 13:26:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:27:06.337 13:26:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:27:06.337 13:26:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@414 -- # local strip_size 00:27:06.337 13:26:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:27:06.337 13:26:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:27:06.337 13:26:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:27:06.337 13:26:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' 
raid1 ']' 00:27:06.337 13:26:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:27:06.337 13:26:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@427 -- # raid_pid=831563 00:27:06.337 13:26:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@428 -- # waitforlisten 831563 /var/tmp/spdk-raid.sock 00:27:06.337 13:26:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:27:06.338 13:26:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 831563 ']' 00:27:06.338 13:26:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:06.338 13:26:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:06.338 13:26:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:06.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:06.338 13:26:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:06.338 13:26:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:06.338 [2024-07-26 13:26:46.671111] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:27:06.338 [2024-07-26 13:26:46.671176] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid831563 ] 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3d:02.3 cannot be used 
00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:06.338 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:06.338 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:06.338 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:06.338 [2024-07-26 13:26:46.804026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.597 [2024-07-26 13:26:46.891579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.597 [2024-07-26 13:26:46.951771] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:06.597 [2024-07-26 13:26:46.951810] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:07.165 13:26:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:07.165 13:26:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:27:07.165 13:26:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:27:07.165 13:26:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:27:07.165 13:26:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:27:07.165 13:26:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:27:07.165 13:26:47 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:07.165 13:26:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:07.165 13:26:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:27:07.165 13:26:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:07.165 13:26:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:27:07.424 malloc1 00:27:07.424 13:26:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:07.684 [2024-07-26 13:26:47.989034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:07.684 [2024-07-26 13:26:47.989076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:07.684 [2024-07-26 13:26:47.989095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd77cc0 00:27:07.684 [2024-07-26 13:26:47.989107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:07.684 [2024-07-26 13:26:47.990542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:07.684 [2024-07-26 13:26:47.990568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:07.684 pt1 00:27:07.684 13:26:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:27:07.684 13:26:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:27:07.684 
13:26:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:27:07.684 13:26:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:27:07.684 13:26:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:07.684 13:26:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:07.684 13:26:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:27:07.684 13:26:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:07.684 13:26:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:27:07.943 malloc2 00:27:07.943 13:26:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:07.943 [2024-07-26 13:26:48.451481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:07.943 [2024-07-26 13:26:48.451522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:07.943 [2024-07-26 13:26:48.451539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe8ab80 00:27:07.943 [2024-07-26 13:26:48.451552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:07.943 [2024-07-26 13:26:48.452786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:07.943 [2024-07-26 13:26:48.452814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:07.943 
pt2 00:27:07.943 13:26:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:27:07.943 13:26:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:27:07.943 13:26:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:27:08.240 [2024-07-26 13:26:48.676095] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:08.240 [2024-07-26 13:26:48.677267] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:08.240 [2024-07-26 13:26:48.677392] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xd783c0 00:27:08.240 [2024-07-26 13:26:48.677405] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:08.240 [2024-07-26 13:26:48.677474] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xcf78c0 00:27:08.240 [2024-07-26 13:26:48.677582] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd783c0 00:27:08.240 [2024-07-26 13:26:48.677592] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd783c0 00:27:08.240 [2024-07-26 13:26:48.677669] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:08.240 13:26:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:08.240 13:26:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:08.240 13:26:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:08.240 13:26:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:08.240 13:26:48 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:08.240 13:26:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:08.240 13:26:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:08.240 13:26:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:08.240 13:26:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:08.240 13:26:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:08.240 13:26:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:08.240 13:26:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:08.528 13:26:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:08.528 "name": "raid_bdev1", 00:27:08.528 "uuid": "5d72441a-32aa-4d3f-bd99-68e52ac65d1f", 00:27:08.528 "strip_size_kb": 0, 00:27:08.529 "state": "online", 00:27:08.529 "raid_level": "raid1", 00:27:08.529 "superblock": true, 00:27:08.529 "num_base_bdevs": 2, 00:27:08.529 "num_base_bdevs_discovered": 2, 00:27:08.529 "num_base_bdevs_operational": 2, 00:27:08.529 "base_bdevs_list": [ 00:27:08.529 { 00:27:08.529 "name": "pt1", 00:27:08.529 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:08.529 "is_configured": true, 00:27:08.529 "data_offset": 256, 00:27:08.529 "data_size": 7936 00:27:08.529 }, 00:27:08.529 { 00:27:08.529 "name": "pt2", 00:27:08.529 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:08.529 "is_configured": true, 00:27:08.529 "data_offset": 256, 00:27:08.529 "data_size": 7936 00:27:08.529 } 00:27:08.529 ] 
00:27:08.529 }' 00:27:08.529 13:26:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:08.529 13:26:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:09.096 13:26:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:27:09.096 13:26:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:09.096 13:26:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:09.096 13:26:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:09.096 13:26:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:09.096 13:26:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:27:09.096 13:26:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:09.097 13:26:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:09.356 [2024-07-26 13:26:49.690966] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:09.356 13:26:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:09.356 "name": "raid_bdev1", 00:27:09.356 "aliases": [ 00:27:09.356 "5d72441a-32aa-4d3f-bd99-68e52ac65d1f" 00:27:09.356 ], 00:27:09.356 "product_name": "Raid Volume", 00:27:09.356 "block_size": 4096, 00:27:09.356 "num_blocks": 7936, 00:27:09.356 "uuid": "5d72441a-32aa-4d3f-bd99-68e52ac65d1f", 00:27:09.356 "md_size": 32, 00:27:09.356 "md_interleave": false, 00:27:09.356 "dif_type": 0, 00:27:09.356 "assigned_rate_limits": { 00:27:09.356 "rw_ios_per_sec": 0, 00:27:09.356 
"rw_mbytes_per_sec": 0, 00:27:09.356 "r_mbytes_per_sec": 0, 00:27:09.356 "w_mbytes_per_sec": 0 00:27:09.356 }, 00:27:09.356 "claimed": false, 00:27:09.356 "zoned": false, 00:27:09.356 "supported_io_types": { 00:27:09.356 "read": true, 00:27:09.356 "write": true, 00:27:09.356 "unmap": false, 00:27:09.356 "flush": false, 00:27:09.356 "reset": true, 00:27:09.356 "nvme_admin": false, 00:27:09.356 "nvme_io": false, 00:27:09.356 "nvme_io_md": false, 00:27:09.356 "write_zeroes": true, 00:27:09.356 "zcopy": false, 00:27:09.356 "get_zone_info": false, 00:27:09.356 "zone_management": false, 00:27:09.356 "zone_append": false, 00:27:09.356 "compare": false, 00:27:09.356 "compare_and_write": false, 00:27:09.356 "abort": false, 00:27:09.356 "seek_hole": false, 00:27:09.356 "seek_data": false, 00:27:09.356 "copy": false, 00:27:09.356 "nvme_iov_md": false 00:27:09.356 }, 00:27:09.356 "memory_domains": [ 00:27:09.356 { 00:27:09.356 "dma_device_id": "system", 00:27:09.356 "dma_device_type": 1 00:27:09.356 }, 00:27:09.356 { 00:27:09.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:09.356 "dma_device_type": 2 00:27:09.356 }, 00:27:09.356 { 00:27:09.356 "dma_device_id": "system", 00:27:09.356 "dma_device_type": 1 00:27:09.356 }, 00:27:09.356 { 00:27:09.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:09.356 "dma_device_type": 2 00:27:09.356 } 00:27:09.356 ], 00:27:09.356 "driver_specific": { 00:27:09.356 "raid": { 00:27:09.356 "uuid": "5d72441a-32aa-4d3f-bd99-68e52ac65d1f", 00:27:09.356 "strip_size_kb": 0, 00:27:09.356 "state": "online", 00:27:09.356 "raid_level": "raid1", 00:27:09.356 "superblock": true, 00:27:09.356 "num_base_bdevs": 2, 00:27:09.356 "num_base_bdevs_discovered": 2, 00:27:09.356 "num_base_bdevs_operational": 2, 00:27:09.356 "base_bdevs_list": [ 00:27:09.356 { 00:27:09.356 "name": "pt1", 00:27:09.356 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:09.356 "is_configured": true, 00:27:09.356 "data_offset": 256, 00:27:09.356 "data_size": 7936 00:27:09.356 }, 
00:27:09.356 { 00:27:09.356 "name": "pt2", 00:27:09.356 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:09.356 "is_configured": true, 00:27:09.356 "data_offset": 256, 00:27:09.356 "data_size": 7936 00:27:09.356 } 00:27:09.356 ] 00:27:09.356 } 00:27:09.356 } 00:27:09.356 }' 00:27:09.356 13:26:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:09.356 13:26:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:09.356 pt2' 00:27:09.356 13:26:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:09.356 13:26:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:09.356 13:26:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:27:09.616 13:26:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:09.616 "name": "pt1", 00:27:09.616 "aliases": [ 00:27:09.616 "00000000-0000-0000-0000-000000000001" 00:27:09.616 ], 00:27:09.616 "product_name": "passthru", 00:27:09.616 "block_size": 4096, 00:27:09.616 "num_blocks": 8192, 00:27:09.616 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:09.616 "md_size": 32, 00:27:09.616 "md_interleave": false, 00:27:09.616 "dif_type": 0, 00:27:09.616 "assigned_rate_limits": { 00:27:09.616 "rw_ios_per_sec": 0, 00:27:09.616 "rw_mbytes_per_sec": 0, 00:27:09.616 "r_mbytes_per_sec": 0, 00:27:09.616 "w_mbytes_per_sec": 0 00:27:09.616 }, 00:27:09.616 "claimed": true, 00:27:09.616 "claim_type": "exclusive_write", 00:27:09.616 "zoned": false, 00:27:09.616 "supported_io_types": { 00:27:09.616 "read": true, 00:27:09.616 "write": true, 00:27:09.616 "unmap": true, 00:27:09.616 "flush": true, 00:27:09.616 "reset": 
true, 00:27:09.616 "nvme_admin": false, 00:27:09.616 "nvme_io": false, 00:27:09.616 "nvme_io_md": false, 00:27:09.616 "write_zeroes": true, 00:27:09.616 "zcopy": true, 00:27:09.616 "get_zone_info": false, 00:27:09.616 "zone_management": false, 00:27:09.616 "zone_append": false, 00:27:09.616 "compare": false, 00:27:09.616 "compare_and_write": false, 00:27:09.616 "abort": true, 00:27:09.616 "seek_hole": false, 00:27:09.616 "seek_data": false, 00:27:09.616 "copy": true, 00:27:09.616 "nvme_iov_md": false 00:27:09.616 }, 00:27:09.616 "memory_domains": [ 00:27:09.616 { 00:27:09.616 "dma_device_id": "system", 00:27:09.616 "dma_device_type": 1 00:27:09.616 }, 00:27:09.616 { 00:27:09.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:09.616 "dma_device_type": 2 00:27:09.616 } 00:27:09.616 ], 00:27:09.616 "driver_specific": { 00:27:09.616 "passthru": { 00:27:09.616 "name": "pt1", 00:27:09.616 "base_bdev_name": "malloc1" 00:27:09.616 } 00:27:09.616 } 00:27:09.616 }' 00:27:09.616 13:26:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:09.616 13:26:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:09.616 13:26:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:09.616 13:26:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:09.616 13:26:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:09.875 13:26:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:09.875 13:26:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:09.875 13:26:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:09.875 13:26:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:27:09.875 13:26:50 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:09.875 13:26:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:09.875 13:26:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:09.875 13:26:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:09.875 13:26:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:09.875 13:26:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:10.134 13:26:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:10.134 "name": "pt2", 00:27:10.134 "aliases": [ 00:27:10.134 "00000000-0000-0000-0000-000000000002" 00:27:10.134 ], 00:27:10.134 "product_name": "passthru", 00:27:10.134 "block_size": 4096, 00:27:10.134 "num_blocks": 8192, 00:27:10.134 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:10.134 "md_size": 32, 00:27:10.134 "md_interleave": false, 00:27:10.134 "dif_type": 0, 00:27:10.134 "assigned_rate_limits": { 00:27:10.134 "rw_ios_per_sec": 0, 00:27:10.134 "rw_mbytes_per_sec": 0, 00:27:10.134 "r_mbytes_per_sec": 0, 00:27:10.134 "w_mbytes_per_sec": 0 00:27:10.134 }, 00:27:10.134 "claimed": true, 00:27:10.134 "claim_type": "exclusive_write", 00:27:10.134 "zoned": false, 00:27:10.134 "supported_io_types": { 00:27:10.134 "read": true, 00:27:10.134 "write": true, 00:27:10.134 "unmap": true, 00:27:10.134 "flush": true, 00:27:10.134 "reset": true, 00:27:10.134 "nvme_admin": false, 00:27:10.134 "nvme_io": false, 00:27:10.134 "nvme_io_md": false, 00:27:10.134 "write_zeroes": true, 00:27:10.135 "zcopy": true, 00:27:10.135 "get_zone_info": false, 00:27:10.135 "zone_management": false, 00:27:10.135 "zone_append": false, 00:27:10.135 
"compare": false, 00:27:10.135 "compare_and_write": false, 00:27:10.135 "abort": true, 00:27:10.135 "seek_hole": false, 00:27:10.135 "seek_data": false, 00:27:10.135 "copy": true, 00:27:10.135 "nvme_iov_md": false 00:27:10.135 }, 00:27:10.135 "memory_domains": [ 00:27:10.135 { 00:27:10.135 "dma_device_id": "system", 00:27:10.135 "dma_device_type": 1 00:27:10.135 }, 00:27:10.135 { 00:27:10.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:10.135 "dma_device_type": 2 00:27:10.135 } 00:27:10.135 ], 00:27:10.135 "driver_specific": { 00:27:10.135 "passthru": { 00:27:10.135 "name": "pt2", 00:27:10.135 "base_bdev_name": "malloc2" 00:27:10.135 } 00:27:10.135 } 00:27:10.135 }' 00:27:10.135 13:26:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:10.135 13:26:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:10.135 13:26:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:10.135 13:26:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:10.394 13:26:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:10.394 13:26:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:10.394 13:26:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:10.394 13:26:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:10.394 13:26:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:27:10.394 13:26:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:10.394 13:26:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:10.394 13:26:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 
]] 00:27:10.394 13:26:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:10.394 13:26:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:27:10.962 [2024-07-26 13:26:51.387446] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:10.962 13:26:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=5d72441a-32aa-4d3f-bd99-68e52ac65d1f 00:27:10.962 13:26:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' -z 5d72441a-32aa-4d3f-bd99-68e52ac65d1f ']' 00:27:10.962 13:26:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:11.221 [2024-07-26 13:26:51.627830] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:11.221 [2024-07-26 13:26:51.627850] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:11.221 [2024-07-26 13:26:51.627902] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:11.221 [2024-07-26 13:26:51.627950] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:11.221 [2024-07-26 13:26:51.627961] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd783c0 name raid_bdev1, state offline 00:27:11.221 13:26:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.221 13:26:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:27:11.480 13:26:51 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:27:11.480 13:26:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:27:11.480 13:26:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:27:11.480 13:26:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:11.739 13:26:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:27:11.739 13:26:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:11.997 13:26:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:27:11.997 13:26:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:12.257 13:26:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:27:12.257 13:26:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:12.257 13:26:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:27:12.257 13:26:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:12.257 13:26:52 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:12.257 13:26:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:12.257 13:26:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:12.257 13:26:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:12.257 13:26:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:12.257 13:26:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:12.257 13:26:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:12.257 13:26:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:27:12.257 13:26:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:27:12.257 [2024-07-26 13:26:52.762776] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:12.257 [2024-07-26 13:26:52.764026] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:12.257 [2024-07-26 13:26:52.764078] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:12.257 [2024-07-26 13:26:52.764117] 
bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:12.257 [2024-07-26 13:26:52.764135] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:12.257 [2024-07-26 13:26:52.764155] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe8c2b0 name raid_bdev1, state configuring 00:27:12.257 request: 00:27:12.257 { 00:27:12.257 "name": "raid_bdev1", 00:27:12.257 "raid_level": "raid1", 00:27:12.257 "base_bdevs": [ 00:27:12.257 "malloc1", 00:27:12.257 "malloc2" 00:27:12.257 ], 00:27:12.257 "superblock": false, 00:27:12.257 "method": "bdev_raid_create", 00:27:12.257 "req_id": 1 00:27:12.257 } 00:27:12.257 Got JSON-RPC error response 00:27:12.257 response: 00:27:12.257 { 00:27:12.257 "code": -17, 00:27:12.257 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:12.257 } 00:27:12.517 13:26:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:27:12.517 13:26:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:12.517 13:26:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:12.517 13:26:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:12.517 13:26:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:12.517 13:26:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:27:12.517 13:26:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:27:12.517 13:26:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:27:12.517 13:26:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@480 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:12.776 [2024-07-26 13:26:53.179821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:12.776 [2024-07-26 13:26:53.179859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:12.776 [2024-07-26 13:26:53.179876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe8adb0 00:27:12.776 [2024-07-26 13:26:53.179887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:12.776 [2024-07-26 13:26:53.181120] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:12.776 [2024-07-26 13:26:53.181159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:12.776 [2024-07-26 13:26:53.181200] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:12.776 [2024-07-26 13:26:53.181222] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:12.776 pt1 00:27:12.776 13:26:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:27:12.776 13:26:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:12.776 13:26:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:12.776 13:26:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:12.776 13:26:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:12.776 13:26:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:12.776 13:26:53 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:12.776 13:26:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:12.776 13:26:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:12.776 13:26:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:12.776 13:26:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:12.776 13:26:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:13.034 13:26:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:13.034 "name": "raid_bdev1", 00:27:13.034 "uuid": "5d72441a-32aa-4d3f-bd99-68e52ac65d1f", 00:27:13.034 "strip_size_kb": 0, 00:27:13.034 "state": "configuring", 00:27:13.034 "raid_level": "raid1", 00:27:13.034 "superblock": true, 00:27:13.034 "num_base_bdevs": 2, 00:27:13.034 "num_base_bdevs_discovered": 1, 00:27:13.034 "num_base_bdevs_operational": 2, 00:27:13.034 "base_bdevs_list": [ 00:27:13.034 { 00:27:13.034 "name": "pt1", 00:27:13.034 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:13.034 "is_configured": true, 00:27:13.034 "data_offset": 256, 00:27:13.034 "data_size": 7936 00:27:13.034 }, 00:27:13.034 { 00:27:13.034 "name": null, 00:27:13.034 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:13.034 "is_configured": false, 00:27:13.034 "data_offset": 256, 00:27:13.034 "data_size": 7936 00:27:13.034 } 00:27:13.034 ] 00:27:13.034 }' 00:27:13.034 13:26:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:13.034 13:26:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:13.601 13:26:53 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:27:13.601 13:26:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:27:13.601 13:26:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:27:13.601 13:26:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:13.859 [2024-07-26 13:26:54.206522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:13.859 [2024-07-26 13:26:54.206564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:13.859 [2024-07-26 13:26:54.206583] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe8d410 00:27:13.859 [2024-07-26 13:26:54.206595] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:13.859 [2024-07-26 13:26:54.206770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:13.859 [2024-07-26 13:26:54.206784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:13.859 [2024-07-26 13:26:54.206824] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:13.859 [2024-07-26 13:26:54.206841] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:13.859 [2024-07-26 13:26:54.206924] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xe8cf10 00:27:13.859 [2024-07-26 13:26:54.206933] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:13.859 [2024-07-26 13:26:54.206982] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xcf6460 00:27:13.859 [2024-07-26 13:26:54.207075] bdev_raid.c:1751:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0xe8cf10 00:27:13.859 [2024-07-26 13:26:54.207083] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe8cf10 00:27:13.859 [2024-07-26 13:26:54.207156] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:13.859 pt2 00:27:13.859 13:26:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:27:13.859 13:26:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:27:13.859 13:26:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:13.859 13:26:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:13.859 13:26:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:13.859 13:26:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:13.859 13:26:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:13.859 13:26:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:13.859 13:26:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:13.860 13:26:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:13.860 13:26:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:13.860 13:26:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:13.860 13:26:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.860 13:26:54 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:14.118 13:26:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:14.118 "name": "raid_bdev1", 00:27:14.118 "uuid": "5d72441a-32aa-4d3f-bd99-68e52ac65d1f", 00:27:14.118 "strip_size_kb": 0, 00:27:14.118 "state": "online", 00:27:14.118 "raid_level": "raid1", 00:27:14.118 "superblock": true, 00:27:14.118 "num_base_bdevs": 2, 00:27:14.118 "num_base_bdevs_discovered": 2, 00:27:14.118 "num_base_bdevs_operational": 2, 00:27:14.118 "base_bdevs_list": [ 00:27:14.118 { 00:27:14.118 "name": "pt1", 00:27:14.118 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:14.118 "is_configured": true, 00:27:14.118 "data_offset": 256, 00:27:14.118 "data_size": 7936 00:27:14.118 }, 00:27:14.118 { 00:27:14.118 "name": "pt2", 00:27:14.118 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:14.118 "is_configured": true, 00:27:14.118 "data_offset": 256, 00:27:14.118 "data_size": 7936 00:27:14.118 } 00:27:14.118 ] 00:27:14.118 }' 00:27:14.118 13:26:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:14.118 13:26:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:14.686 13:26:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:27:14.686 13:26:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:27:14.686 13:26:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:14.686 13:26:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:14.686 13:26:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:14.686 13:26:54 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@198 -- # local name 00:27:14.686 13:26:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:14.686 13:26:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:14.686 [2024-07-26 13:26:55.141236] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:14.686 13:26:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:14.686 "name": "raid_bdev1", 00:27:14.686 "aliases": [ 00:27:14.686 "5d72441a-32aa-4d3f-bd99-68e52ac65d1f" 00:27:14.686 ], 00:27:14.686 "product_name": "Raid Volume", 00:27:14.686 "block_size": 4096, 00:27:14.686 "num_blocks": 7936, 00:27:14.686 "uuid": "5d72441a-32aa-4d3f-bd99-68e52ac65d1f", 00:27:14.686 "md_size": 32, 00:27:14.686 "md_interleave": false, 00:27:14.686 "dif_type": 0, 00:27:14.686 "assigned_rate_limits": { 00:27:14.686 "rw_ios_per_sec": 0, 00:27:14.686 "rw_mbytes_per_sec": 0, 00:27:14.686 "r_mbytes_per_sec": 0, 00:27:14.686 "w_mbytes_per_sec": 0 00:27:14.686 }, 00:27:14.686 "claimed": false, 00:27:14.686 "zoned": false, 00:27:14.686 "supported_io_types": { 00:27:14.686 "read": true, 00:27:14.686 "write": true, 00:27:14.686 "unmap": false, 00:27:14.686 "flush": false, 00:27:14.686 "reset": true, 00:27:14.686 "nvme_admin": false, 00:27:14.686 "nvme_io": false, 00:27:14.686 "nvme_io_md": false, 00:27:14.686 "write_zeroes": true, 00:27:14.686 "zcopy": false, 00:27:14.686 "get_zone_info": false, 00:27:14.686 "zone_management": false, 00:27:14.686 "zone_append": false, 00:27:14.686 "compare": false, 00:27:14.686 "compare_and_write": false, 00:27:14.686 "abort": false, 00:27:14.686 "seek_hole": false, 00:27:14.686 "seek_data": false, 00:27:14.686 "copy": false, 00:27:14.686 "nvme_iov_md": false 00:27:14.686 }, 00:27:14.686 "memory_domains": [ 00:27:14.686 { 00:27:14.686 
"dma_device_id": "system", 00:27:14.686 "dma_device_type": 1 00:27:14.686 }, 00:27:14.686 { 00:27:14.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:14.686 "dma_device_type": 2 00:27:14.686 }, 00:27:14.686 { 00:27:14.686 "dma_device_id": "system", 00:27:14.686 "dma_device_type": 1 00:27:14.686 }, 00:27:14.686 { 00:27:14.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:14.686 "dma_device_type": 2 00:27:14.686 } 00:27:14.686 ], 00:27:14.686 "driver_specific": { 00:27:14.686 "raid": { 00:27:14.686 "uuid": "5d72441a-32aa-4d3f-bd99-68e52ac65d1f", 00:27:14.686 "strip_size_kb": 0, 00:27:14.686 "state": "online", 00:27:14.686 "raid_level": "raid1", 00:27:14.686 "superblock": true, 00:27:14.686 "num_base_bdevs": 2, 00:27:14.686 "num_base_bdevs_discovered": 2, 00:27:14.686 "num_base_bdevs_operational": 2, 00:27:14.686 "base_bdevs_list": [ 00:27:14.686 { 00:27:14.686 "name": "pt1", 00:27:14.686 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:14.686 "is_configured": true, 00:27:14.686 "data_offset": 256, 00:27:14.686 "data_size": 7936 00:27:14.686 }, 00:27:14.686 { 00:27:14.686 "name": "pt2", 00:27:14.686 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:14.686 "is_configured": true, 00:27:14.686 "data_offset": 256, 00:27:14.686 "data_size": 7936 00:27:14.686 } 00:27:14.686 ] 00:27:14.686 } 00:27:14.686 } 00:27:14.686 }' 00:27:14.686 13:26:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:14.686 13:26:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:27:14.686 pt2' 00:27:14.686 13:26:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:14.686 13:26:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 
00:27:14.686 13:26:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:14.945 13:26:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:14.945 "name": "pt1", 00:27:14.945 "aliases": [ 00:27:14.945 "00000000-0000-0000-0000-000000000001" 00:27:14.945 ], 00:27:14.945 "product_name": "passthru", 00:27:14.945 "block_size": 4096, 00:27:14.945 "num_blocks": 8192, 00:27:14.945 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:14.945 "md_size": 32, 00:27:14.945 "md_interleave": false, 00:27:14.945 "dif_type": 0, 00:27:14.945 "assigned_rate_limits": { 00:27:14.945 "rw_ios_per_sec": 0, 00:27:14.945 "rw_mbytes_per_sec": 0, 00:27:14.945 "r_mbytes_per_sec": 0, 00:27:14.945 "w_mbytes_per_sec": 0 00:27:14.945 }, 00:27:14.945 "claimed": true, 00:27:14.945 "claim_type": "exclusive_write", 00:27:14.945 "zoned": false, 00:27:14.945 "supported_io_types": { 00:27:14.945 "read": true, 00:27:14.945 "write": true, 00:27:14.945 "unmap": true, 00:27:14.945 "flush": true, 00:27:14.945 "reset": true, 00:27:14.945 "nvme_admin": false, 00:27:14.945 "nvme_io": false, 00:27:14.945 "nvme_io_md": false, 00:27:14.945 "write_zeroes": true, 00:27:14.945 "zcopy": true, 00:27:14.945 "get_zone_info": false, 00:27:14.945 "zone_management": false, 00:27:14.945 "zone_append": false, 00:27:14.945 "compare": false, 00:27:14.945 "compare_and_write": false, 00:27:14.945 "abort": true, 00:27:14.945 "seek_hole": false, 00:27:14.945 "seek_data": false, 00:27:14.945 "copy": true, 00:27:14.945 "nvme_iov_md": false 00:27:14.945 }, 00:27:14.945 "memory_domains": [ 00:27:14.945 { 00:27:14.945 "dma_device_id": "system", 00:27:14.945 "dma_device_type": 1 00:27:14.945 }, 00:27:14.945 { 00:27:14.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:14.945 "dma_device_type": 2 00:27:14.945 } 00:27:14.945 ], 00:27:14.945 "driver_specific": { 00:27:14.945 "passthru": { 00:27:14.945 "name": "pt1", 00:27:14.945 "base_bdev_name": "malloc1" 
00:27:14.945 } 00:27:14.945 } 00:27:14.945 }' 00:27:14.945 13:26:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:14.945 13:26:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:15.204 13:26:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:15.204 13:26:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:15.204 13:26:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:15.204 13:26:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:15.204 13:26:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:15.204 13:26:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:15.204 13:26:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:27:15.204 13:26:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:15.204 13:26:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:15.463 13:26:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:15.463 13:26:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:15.463 13:26:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:27:15.463 13:26:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:15.463 13:26:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:15.463 "name": "pt2", 00:27:15.463 "aliases": [ 00:27:15.463 
"00000000-0000-0000-0000-000000000002" 00:27:15.463 ], 00:27:15.463 "product_name": "passthru", 00:27:15.463 "block_size": 4096, 00:27:15.463 "num_blocks": 8192, 00:27:15.463 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:15.463 "md_size": 32, 00:27:15.463 "md_interleave": false, 00:27:15.463 "dif_type": 0, 00:27:15.463 "assigned_rate_limits": { 00:27:15.463 "rw_ios_per_sec": 0, 00:27:15.463 "rw_mbytes_per_sec": 0, 00:27:15.463 "r_mbytes_per_sec": 0, 00:27:15.463 "w_mbytes_per_sec": 0 00:27:15.463 }, 00:27:15.463 "claimed": true, 00:27:15.463 "claim_type": "exclusive_write", 00:27:15.463 "zoned": false, 00:27:15.463 "supported_io_types": { 00:27:15.463 "read": true, 00:27:15.463 "write": true, 00:27:15.463 "unmap": true, 00:27:15.463 "flush": true, 00:27:15.463 "reset": true, 00:27:15.463 "nvme_admin": false, 00:27:15.463 "nvme_io": false, 00:27:15.463 "nvme_io_md": false, 00:27:15.463 "write_zeroes": true, 00:27:15.463 "zcopy": true, 00:27:15.463 "get_zone_info": false, 00:27:15.463 "zone_management": false, 00:27:15.463 "zone_append": false, 00:27:15.463 "compare": false, 00:27:15.463 "compare_and_write": false, 00:27:15.463 "abort": true, 00:27:15.463 "seek_hole": false, 00:27:15.463 "seek_data": false, 00:27:15.463 "copy": true, 00:27:15.463 "nvme_iov_md": false 00:27:15.463 }, 00:27:15.463 "memory_domains": [ 00:27:15.463 { 00:27:15.463 "dma_device_id": "system", 00:27:15.463 "dma_device_type": 1 00:27:15.463 }, 00:27:15.463 { 00:27:15.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:15.463 "dma_device_type": 2 00:27:15.463 } 00:27:15.463 ], 00:27:15.463 "driver_specific": { 00:27:15.463 "passthru": { 00:27:15.463 "name": "pt2", 00:27:15.463 "base_bdev_name": "malloc2" 00:27:15.463 } 00:27:15.463 } 00:27:15.463 }' 00:27:15.463 13:26:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:15.722 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:15.722 
13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:27:15.722 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:15.722 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:15.722 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:15.722 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:15.722 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:15.722 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:27:15.722 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:15.981 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:15.981 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:15.981 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:15.981 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:27:16.240 [2024-07-26 13:26:56.520866] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:16.240 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # '[' 5d72441a-32aa-4d3f-bd99-68e52ac65d1f '!=' 5d72441a-32aa-4d3f-bd99-68e52ac65d1f ']' 00:27:16.240 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:27:16.240 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:16.240 13:26:56 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:27:16.240 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:16.240 [2024-07-26 13:26:56.749255] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:27:16.500 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:16.500 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:16.500 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:16.500 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:16.500 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:16.500 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:16.500 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:16.500 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:16.500 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:16.500 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:16.500 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:16.500 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:16.500 13:26:56 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:16.500 "name": "raid_bdev1", 00:27:16.500 "uuid": "5d72441a-32aa-4d3f-bd99-68e52ac65d1f", 00:27:16.500 "strip_size_kb": 0, 00:27:16.500 "state": "online", 00:27:16.500 "raid_level": "raid1", 00:27:16.500 "superblock": true, 00:27:16.500 "num_base_bdevs": 2, 00:27:16.500 "num_base_bdevs_discovered": 1, 00:27:16.500 "num_base_bdevs_operational": 1, 00:27:16.500 "base_bdevs_list": [ 00:27:16.500 { 00:27:16.500 "name": null, 00:27:16.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:16.500 "is_configured": false, 00:27:16.500 "data_offset": 256, 00:27:16.500 "data_size": 7936 00:27:16.500 }, 00:27:16.500 { 00:27:16.500 "name": "pt2", 00:27:16.500 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:16.500 "is_configured": true, 00:27:16.500 "data_offset": 256, 00:27:16.500 "data_size": 7936 00:27:16.500 } 00:27:16.500 ] 00:27:16.500 }' 00:27:16.500 13:26:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:16.500 13:26:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:17.068 13:26:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:17.328 [2024-07-26 13:26:57.687691] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:17.328 [2024-07-26 13:26:57.687713] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:17.328 [2024-07-26 13:26:57.687761] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:17.328 [2024-07-26 13:26:57.687799] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:17.328 [2024-07-26 13:26:57.687809] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0xe8cf10 name raid_bdev1, state offline 00:27:17.328 13:26:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:17.328 13:26:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:27:17.587 13:26:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:27:17.587 13:26:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:27:17.587 13:26:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:27:17.587 13:26:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:27:17.587 13:26:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:17.846 13:26:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:27:17.846 13:26:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:27:17.846 13:26:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:27:17.846 13:26:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:27:17.846 13:26:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@534 -- # i=1 00:27:17.846 13:26:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:17.846 [2024-07-26 13:26:58.369461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:17.846 [2024-07-26 
13:26:58.369503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:17.846 [2024-07-26 13:26:58.369518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd77ef0 00:27:17.846 [2024-07-26 13:26:58.369530] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:17.846 [2024-07-26 13:26:58.370888] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:17.847 [2024-07-26 13:26:58.370914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:17.847 [2024-07-26 13:26:58.370959] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:17.847 [2024-07-26 13:26:58.370983] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:17.847 [2024-07-26 13:26:58.371060] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xe8d090 00:27:17.847 [2024-07-26 13:26:58.371070] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:17.847 [2024-07-26 13:26:58.371124] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe89cf0 00:27:17.847 [2024-07-26 13:26:58.371223] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe8d090 00:27:17.847 [2024-07-26 13:26:58.371233] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe8d090 00:27:17.847 [2024-07-26 13:26:58.371297] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:18.106 pt2 00:27:18.106 13:26:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:18.106 13:26:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:18.106 13:26:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:18.106 
13:26:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:18.106 13:26:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:18.106 13:26:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:18.106 13:26:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:18.106 13:26:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:18.106 13:26:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:18.106 13:26:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:18.106 13:26:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:18.106 13:26:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:18.106 13:26:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:18.106 "name": "raid_bdev1", 00:27:18.106 "uuid": "5d72441a-32aa-4d3f-bd99-68e52ac65d1f", 00:27:18.106 "strip_size_kb": 0, 00:27:18.106 "state": "online", 00:27:18.106 "raid_level": "raid1", 00:27:18.106 "superblock": true, 00:27:18.106 "num_base_bdevs": 2, 00:27:18.106 "num_base_bdevs_discovered": 1, 00:27:18.106 "num_base_bdevs_operational": 1, 00:27:18.106 "base_bdevs_list": [ 00:27:18.106 { 00:27:18.106 "name": null, 00:27:18.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:18.106 "is_configured": false, 00:27:18.106 "data_offset": 256, 00:27:18.106 "data_size": 7936 00:27:18.106 }, 00:27:18.106 { 00:27:18.106 "name": "pt2", 00:27:18.106 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:18.106 
"is_configured": true, 00:27:18.106 "data_offset": 256, 00:27:18.106 "data_size": 7936 00:27:18.106 } 00:27:18.106 ] 00:27:18.106 }' 00:27:18.106 13:26:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:18.106 13:26:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:18.740 13:26:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:19.000 [2024-07-26 13:26:59.335996] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:19.000 [2024-07-26 13:26:59.336020] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:19.000 [2024-07-26 13:26:59.336069] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:19.000 [2024-07-26 13:26:59.336109] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:19.000 [2024-07-26 13:26:59.336120] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe8d090 name raid_bdev1, state offline 00:27:19.000 13:26:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:19.000 13:26:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:27:19.259 13:26:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:27:19.259 13:26:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:27:19.259 13:26:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:27:19.259 13:26:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:19.518 [2024-07-26 13:26:59.805221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:19.518 [2024-07-26 13:26:59.805265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:19.518 [2024-07-26 13:26:59.805282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xcf61f0 00:27:19.518 [2024-07-26 13:26:59.805294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:19.518 [2024-07-26 13:26:59.806641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:19.518 [2024-07-26 13:26:59.806667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:19.518 [2024-07-26 13:26:59.806710] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:19.518 [2024-07-26 13:26:59.806732] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:19.518 [2024-07-26 13:26:59.806816] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:27:19.518 [2024-07-26 13:26:59.806828] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:19.518 [2024-07-26 13:26:59.806841] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xcf6570 name raid_bdev1, state configuring 00:27:19.518 [2024-07-26 13:26:59.806862] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:19.518 [2024-07-26 13:26:59.806910] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xcf5c30 00:27:19.518 [2024-07-26 13:26:59.806919] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:19.518 [2024-07-26 13:26:59.806968] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xcf6890 00:27:19.518 [2024-07-26 13:26:59.807054] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xcf5c30 00:27:19.518 [2024-07-26 13:26:59.807063] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xcf5c30 00:27:19.518 [2024-07-26 13:26:59.807132] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:19.518 pt1 00:27:19.518 13:26:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:27:19.518 13:26:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:19.518 13:26:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:19.518 13:26:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:19.518 13:26:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:19.518 13:26:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:19.518 13:26:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:19.518 13:26:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:19.518 13:26:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:19.518 13:26:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:19.518 13:26:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:19.518 13:26:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:27:19.518 13:26:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:19.777 13:27:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:19.777 "name": "raid_bdev1", 00:27:19.777 "uuid": "5d72441a-32aa-4d3f-bd99-68e52ac65d1f", 00:27:19.777 "strip_size_kb": 0, 00:27:19.777 "state": "online", 00:27:19.777 "raid_level": "raid1", 00:27:19.777 "superblock": true, 00:27:19.777 "num_base_bdevs": 2, 00:27:19.777 "num_base_bdevs_discovered": 1, 00:27:19.777 "num_base_bdevs_operational": 1, 00:27:19.777 "base_bdevs_list": [ 00:27:19.777 { 00:27:19.777 "name": null, 00:27:19.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:19.777 "is_configured": false, 00:27:19.777 "data_offset": 256, 00:27:19.777 "data_size": 7936 00:27:19.777 }, 00:27:19.777 { 00:27:19.777 "name": "pt2", 00:27:19.777 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:19.777 "is_configured": true, 00:27:19.777 "data_offset": 256, 00:27:19.777 "data_size": 7936 00:27:19.777 } 00:27:19.777 ] 00:27:19.777 }' 00:27:19.777 13:27:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:19.777 13:27:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:20.345 13:27:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:20.345 13:27:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:27:20.345 13:27:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:27:20.345 13:27:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:20.345 13:27:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:27:20.603 [2024-07-26 13:27:00.964588] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:20.604 13:27:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # '[' 5d72441a-32aa-4d3f-bd99-68e52ac65d1f '!=' 5d72441a-32aa-4d3f-bd99-68e52ac65d1f ']' 00:27:20.604 13:27:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@578 -- # killprocess 831563 00:27:20.604 13:27:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 831563 ']' 00:27:20.604 13:27:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 831563 00:27:20.604 13:27:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:27:20.604 13:27:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:20.604 13:27:00 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 831563 00:27:20.604 13:27:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:20.604 13:27:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:20.604 13:27:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 831563' 00:27:20.604 killing process with pid 831563 00:27:20.604 13:27:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 831563 00:27:20.604 [2024-07-26 13:27:01.047728] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:20.604 [2024-07-26 13:27:01.047778] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:20.604 [2024-07-26 
13:27:01.047821] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:20.604 [2024-07-26 13:27:01.047832] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xcf5c30 name raid_bdev1, state offline 00:27:20.604 13:27:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 831563 00:27:20.604 [2024-07-26 13:27:01.067811] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:20.863 13:27:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@580 -- # return 0 00:27:20.863 00:27:20.863 real 0m14.648s 00:27:20.863 user 0m26.497s 00:27:20.863 sys 0m2.757s 00:27:20.863 13:27:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:20.863 13:27:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:20.863 ************************************ 00:27:20.863 END TEST raid_superblock_test_md_separate 00:27:20.863 ************************************ 00:27:20.863 13:27:01 bdev_raid -- bdev/bdev_raid.sh@987 -- # '[' true = true ']' 00:27:20.863 13:27:01 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:27:20.863 13:27:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:27:20.863 13:27:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:20.864 13:27:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:20.864 ************************************ 00:27:20.864 START TEST raid_rebuild_test_sb_md_separate 00:27:20.864 ************************************ 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:27:20.864 13:27:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # local verify=true 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # local strip_size 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # local create_arg 00:27:20.864 13:27:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@594 -- # local data_offset 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # raid_pid=834469 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # waitforlisten 834469 /var/tmp/spdk-raid.sock 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 834469 ']' 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:20.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:20.864 13:27:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:21.124 [2024-07-26 13:27:01.394447] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:27:21.124 [2024-07-26 13:27:01.394509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid834469 ] 00:27:21.124 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:21.124 Zero copy mechanism will not be used. 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: 
Requested device 0000:3d:02.0 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 
0000:3f:01.6 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:21.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:21.124 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:21.124 [2024-07-26 13:27:01.526272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.124 [2024-07-26 13:27:01.614109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.384 [2024-07-26 13:27:01.673798] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:21.384 [2024-07-26 13:27:01.673825] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:21.952 13:27:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:21.952 13:27:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:27:21.952 13:27:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:21.952 13:27:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:27:22.212 BaseBdev1_malloc 00:27:22.212 13:27:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:22.212 [2024-07-26 13:27:02.730693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:22.212 [2024-07-26 13:27:02.730735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:22.212 [2024-07-26 13:27:02.730757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2531fc0 00:27:22.212 [2024-07-26 13:27:02.730770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:22.212 [2024-07-26 13:27:02.732250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:22.212 [2024-07-26 13:27:02.732276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:22.212 BaseBdev1 00:27:22.471 13:27:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:22.471 13:27:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:27:22.471 BaseBdev2_malloc 00:27:22.471 13:27:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p 
BaseBdev2 00:27:22.730 [2024-07-26 13:27:03.197080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:22.730 [2024-07-26 13:27:03.197120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:22.730 [2024-07-26 13:27:03.197145] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26451f0 00:27:22.730 [2024-07-26 13:27:03.197157] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:22.730 [2024-07-26 13:27:03.198399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:22.730 [2024-07-26 13:27:03.198424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:22.730 BaseBdev2 00:27:22.730 13:27:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:27:22.988 spare_malloc 00:27:22.988 13:27:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:23.247 spare_delay 00:27:23.247 13:27:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:23.507 [2024-07-26 13:27:03.883920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:23.507 [2024-07-26 13:27:03.883960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:23.507 [2024-07-26 13:27:03.883983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2648230 00:27:23.507 [2024-07-26 13:27:03.883995] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:23.507 [2024-07-26 13:27:03.885255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:23.507 [2024-07-26 13:27:03.885280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:23.507 spare 00:27:23.507 13:27:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:27:23.766 [2024-07-26 13:27:04.108539] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:23.766 [2024-07-26 13:27:04.109711] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:23.766 [2024-07-26 13:27:04.109854] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2648fb0 00:27:23.766 [2024-07-26 13:27:04.109866] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:23.766 [2024-07-26 13:27:04.109935] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x264be80 00:27:23.766 [2024-07-26 13:27:04.110037] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2648fb0 00:27:23.766 [2024-07-26 13:27:04.110046] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2648fb0 00:27:23.766 [2024-07-26 13:27:04.110122] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:23.766 13:27:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:23.766 13:27:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:23.766 13:27:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:23.766 13:27:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:23.766 13:27:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:23.766 13:27:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:23.766 13:27:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:23.766 13:27:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:23.766 13:27:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:23.766 13:27:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:23.766 13:27:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.766 13:27:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:24.025 13:27:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:24.025 "name": "raid_bdev1", 00:27:24.025 "uuid": "3c43e0f3-9fd8-462b-a66d-9325101668a1", 00:27:24.025 "strip_size_kb": 0, 00:27:24.025 "state": "online", 00:27:24.025 "raid_level": "raid1", 00:27:24.025 "superblock": true, 00:27:24.025 "num_base_bdevs": 2, 00:27:24.025 "num_base_bdevs_discovered": 2, 00:27:24.025 "num_base_bdevs_operational": 2, 00:27:24.025 "base_bdevs_list": [ 00:27:24.025 { 00:27:24.025 "name": "BaseBdev1", 00:27:24.025 "uuid": "efa8b366-cafb-5285-94c9-6fdd577a4403", 00:27:24.025 "is_configured": true, 00:27:24.025 "data_offset": 256, 00:27:24.025 "data_size": 7936 00:27:24.025 }, 00:27:24.025 { 00:27:24.025 "name": "BaseBdev2", 00:27:24.025 "uuid": "57d98246-dda4-5887-8efd-f26e34f8c229", 
00:27:24.025 "is_configured": true, 00:27:24.025 "data_offset": 256, 00:27:24.025 "data_size": 7936 00:27:24.025 } 00:27:24.025 ] 00:27:24.025 }' 00:27:24.025 13:27:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:24.025 13:27:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:24.594 13:27:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:24.594 13:27:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:27:24.594 [2024-07-26 13:27:05.091535] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:24.594 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:27:24.594 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:24.594 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:24.853 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:27:24.853 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:27:24.854 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:27:24.854 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:27:24.854 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:27:24.854 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:27:24.854 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:24.854 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:24.854 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:24.854 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:24.854 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:27:24.854 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:24.854 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:24.854 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:25.113 [2024-07-26 13:27:05.544539] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x264bdb0 00:27:25.113 /dev/nbd0 00:27:25.113 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:25.113 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:25.113 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:27:25.113 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:27:25.113 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:25.113 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:25.113 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:27:25.113 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:27:25.113 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:25.113 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:25.113 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:25.113 1+0 records in 00:27:25.113 1+0 records out 00:27:25.113 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256214 s, 16.0 MB/s 00:27:25.113 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:25.113 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:27:25.113 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:25.113 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:25.113 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:27:25.113 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:25.113 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:25.113 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:27:25.113 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:27:25.113 13:27:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 
oflag=direct 00:27:26.050 7936+0 records in 00:27:26.050 7936+0 records out 00:27:26.050 32505856 bytes (33 MB, 31 MiB) copied, 0.679095 s, 47.9 MB/s 00:27:26.051 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:26.051 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:26.051 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:26.051 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:26.051 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:27:26.051 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:26.051 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:26.051 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:26.051 [2024-07-26 13:27:06.539792] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:26.051 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:26.051 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:26.051 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:26.051 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:26.051 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:26.051 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 
00:27:26.051 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:27:26.051 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:26.310 [2024-07-26 13:27:06.700248] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:26.310 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:26.310 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:26.310 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:26.310 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:26.310 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:26.310 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:26.310 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:26.310 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:26.310 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:26.310 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:26.310 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:26.310 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:27:26.569 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:26.569 "name": "raid_bdev1", 00:27:26.569 "uuid": "3c43e0f3-9fd8-462b-a66d-9325101668a1", 00:27:26.569 "strip_size_kb": 0, 00:27:26.569 "state": "online", 00:27:26.569 "raid_level": "raid1", 00:27:26.569 "superblock": true, 00:27:26.569 "num_base_bdevs": 2, 00:27:26.569 "num_base_bdevs_discovered": 1, 00:27:26.569 "num_base_bdevs_operational": 1, 00:27:26.569 "base_bdevs_list": [ 00:27:26.569 { 00:27:26.569 "name": null, 00:27:26.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.569 "is_configured": false, 00:27:26.569 "data_offset": 256, 00:27:26.569 "data_size": 7936 00:27:26.569 }, 00:27:26.569 { 00:27:26.569 "name": "BaseBdev2", 00:27:26.569 "uuid": "57d98246-dda4-5887-8efd-f26e34f8c229", 00:27:26.569 "is_configured": true, 00:27:26.569 "data_offset": 256, 00:27:26.569 "data_size": 7936 00:27:26.569 } 00:27:26.569 ] 00:27:26.569 }' 00:27:26.569 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:26.569 13:27:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:27.137 13:27:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:27.704 [2024-07-26 13:27:07.963799] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:27.704 [2024-07-26 13:27:07.966025] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x264bd50 00:27:27.704 [2024-07-26 13:27:07.968053] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:27.704 13:27:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:28.640 13:27:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:28.640 13:27:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:28.641 13:27:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:28.641 13:27:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:28.641 13:27:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:28.641 13:27:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:28.641 13:27:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:28.899 13:27:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:28.899 "name": "raid_bdev1", 00:27:28.899 "uuid": "3c43e0f3-9fd8-462b-a66d-9325101668a1", 00:27:28.899 "strip_size_kb": 0, 00:27:28.899 "state": "online", 00:27:28.899 "raid_level": "raid1", 00:27:28.899 "superblock": true, 00:27:28.899 "num_base_bdevs": 2, 00:27:28.899 "num_base_bdevs_discovered": 2, 00:27:28.899 "num_base_bdevs_operational": 2, 00:27:28.899 "process": { 00:27:28.899 "type": "rebuild", 00:27:28.899 "target": "spare", 00:27:28.899 "progress": { 00:27:28.899 "blocks": 3072, 00:27:28.899 "percent": 38 00:27:28.899 } 00:27:28.899 }, 00:27:28.899 "base_bdevs_list": [ 00:27:28.899 { 00:27:28.899 "name": "spare", 00:27:28.899 "uuid": "ff26d632-4cfc-52e1-8c8c-ff25d0b096db", 00:27:28.899 "is_configured": true, 00:27:28.899 "data_offset": 256, 00:27:28.899 "data_size": 7936 00:27:28.899 }, 00:27:28.899 { 00:27:28.899 "name": "BaseBdev2", 00:27:28.899 "uuid": "57d98246-dda4-5887-8efd-f26e34f8c229", 00:27:28.899 "is_configured": true, 00:27:28.899 
"data_offset": 256, 00:27:28.899 "data_size": 7936 00:27:28.899 } 00:27:28.899 ] 00:27:28.899 }' 00:27:28.899 13:27:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:28.899 13:27:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:28.899 13:27:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:28.899 13:27:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:28.899 13:27:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:29.158 [2024-07-26 13:27:09.528763] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:29.158 [2024-07-26 13:27:09.579953] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:29.158 [2024-07-26 13:27:09.579997] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:29.158 [2024-07-26 13:27:09.580011] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:29.158 [2024-07-26 13:27:09.580019] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:29.158 13:27:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:29.158 13:27:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:29.158 13:27:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:29.158 13:27:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:29.158 13:27:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:29.158 13:27:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:29.158 13:27:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:29.158 13:27:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:29.158 13:27:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:29.158 13:27:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:29.158 13:27:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:29.158 13:27:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:29.417 13:27:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:29.417 "name": "raid_bdev1", 00:27:29.417 "uuid": "3c43e0f3-9fd8-462b-a66d-9325101668a1", 00:27:29.417 "strip_size_kb": 0, 00:27:29.417 "state": "online", 00:27:29.417 "raid_level": "raid1", 00:27:29.417 "superblock": true, 00:27:29.417 "num_base_bdevs": 2, 00:27:29.417 "num_base_bdevs_discovered": 1, 00:27:29.417 "num_base_bdevs_operational": 1, 00:27:29.417 "base_bdevs_list": [ 00:27:29.417 { 00:27:29.417 "name": null, 00:27:29.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:29.417 "is_configured": false, 00:27:29.417 "data_offset": 256, 00:27:29.417 "data_size": 7936 00:27:29.417 }, 00:27:29.417 { 00:27:29.417 "name": "BaseBdev2", 00:27:29.417 "uuid": "57d98246-dda4-5887-8efd-f26e34f8c229", 00:27:29.417 "is_configured": true, 00:27:29.417 "data_offset": 256, 00:27:29.417 "data_size": 7936 00:27:29.417 } 00:27:29.417 ] 
00:27:29.417 }' 00:27:29.417 13:27:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:29.417 13:27:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:29.984 13:27:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:29.984 13:27:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:29.984 13:27:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:29.984 13:27:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:29.984 13:27:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:29.984 13:27:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:29.984 13:27:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:30.243 13:27:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:30.243 "name": "raid_bdev1", 00:27:30.243 "uuid": "3c43e0f3-9fd8-462b-a66d-9325101668a1", 00:27:30.243 "strip_size_kb": 0, 00:27:30.243 "state": "online", 00:27:30.243 "raid_level": "raid1", 00:27:30.243 "superblock": true, 00:27:30.243 "num_base_bdevs": 2, 00:27:30.243 "num_base_bdevs_discovered": 1, 00:27:30.243 "num_base_bdevs_operational": 1, 00:27:30.243 "base_bdevs_list": [ 00:27:30.243 { 00:27:30.243 "name": null, 00:27:30.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:30.243 "is_configured": false, 00:27:30.243 "data_offset": 256, 00:27:30.243 "data_size": 7936 00:27:30.243 }, 00:27:30.243 { 00:27:30.243 "name": "BaseBdev2", 00:27:30.243 "uuid": 
"57d98246-dda4-5887-8efd-f26e34f8c229", 00:27:30.243 "is_configured": true, 00:27:30.243 "data_offset": 256, 00:27:30.243 "data_size": 7936 00:27:30.243 } 00:27:30.243 ] 00:27:30.243 }' 00:27:30.243 13:27:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:30.243 13:27:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:30.243 13:27:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:30.243 13:27:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:30.243 13:27:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:30.502 [2024-07-26 13:27:10.930473] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:30.502 [2024-07-26 13:27:10.932657] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24b0cd0 00:27:30.502 [2024-07-26 13:27:10.934002] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:30.502 13:27:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@678 -- # sleep 1 00:27:31.438 13:27:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:31.438 13:27:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:31.438 13:27:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:31.438 13:27:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:31.438 13:27:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local 
raid_bdev_info 00:27:31.438 13:27:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:31.438 13:27:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:31.697 13:27:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:31.697 "name": "raid_bdev1", 00:27:31.697 "uuid": "3c43e0f3-9fd8-462b-a66d-9325101668a1", 00:27:31.697 "strip_size_kb": 0, 00:27:31.697 "state": "online", 00:27:31.697 "raid_level": "raid1", 00:27:31.697 "superblock": true, 00:27:31.697 "num_base_bdevs": 2, 00:27:31.697 "num_base_bdevs_discovered": 2, 00:27:31.697 "num_base_bdevs_operational": 2, 00:27:31.697 "process": { 00:27:31.697 "type": "rebuild", 00:27:31.697 "target": "spare", 00:27:31.697 "progress": { 00:27:31.697 "blocks": 3072, 00:27:31.697 "percent": 38 00:27:31.697 } 00:27:31.697 }, 00:27:31.697 "base_bdevs_list": [ 00:27:31.697 { 00:27:31.697 "name": "spare", 00:27:31.697 "uuid": "ff26d632-4cfc-52e1-8c8c-ff25d0b096db", 00:27:31.697 "is_configured": true, 00:27:31.697 "data_offset": 256, 00:27:31.697 "data_size": 7936 00:27:31.697 }, 00:27:31.697 { 00:27:31.697 "name": "BaseBdev2", 00:27:31.697 "uuid": "57d98246-dda4-5887-8efd-f26e34f8c229", 00:27:31.697 "is_configured": true, 00:27:31.697 "data_offset": 256, 00:27:31.697 "data_size": 7936 00:27:31.697 } 00:27:31.697 ] 00:27:31.697 }' 00:27:31.697 13:27:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:31.956 13:27:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:31.956 13:27:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:31.956 13:27:12 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:31.956 13:27:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:27:31.957 13:27:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:27:31.957 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:27:31.957 13:27:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:27:31.957 13:27:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:27:31.957 13:27:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:27:31.957 13:27:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # local timeout=1018 00:27:31.957 13:27:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:31.957 13:27:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:31.957 13:27:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:31.957 13:27:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:31.957 13:27:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:31.957 13:27:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:31.957 13:27:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:31.957 13:27:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:32.215 13:27:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:32.216 "name": "raid_bdev1", 00:27:32.216 "uuid": "3c43e0f3-9fd8-462b-a66d-9325101668a1", 00:27:32.216 "strip_size_kb": 0, 00:27:32.216 "state": "online", 00:27:32.216 "raid_level": "raid1", 00:27:32.216 "superblock": true, 00:27:32.216 "num_base_bdevs": 2, 00:27:32.216 "num_base_bdevs_discovered": 2, 00:27:32.216 "num_base_bdevs_operational": 2, 00:27:32.216 "process": { 00:27:32.216 "type": "rebuild", 00:27:32.216 "target": "spare", 00:27:32.216 "progress": { 00:27:32.216 "blocks": 3840, 00:27:32.216 "percent": 48 00:27:32.216 } 00:27:32.216 }, 00:27:32.216 "base_bdevs_list": [ 00:27:32.216 { 00:27:32.216 "name": "spare", 00:27:32.216 "uuid": "ff26d632-4cfc-52e1-8c8c-ff25d0b096db", 00:27:32.216 "is_configured": true, 00:27:32.216 "data_offset": 256, 00:27:32.216 "data_size": 7936 00:27:32.216 }, 00:27:32.216 { 00:27:32.216 "name": "BaseBdev2", 00:27:32.216 "uuid": "57d98246-dda4-5887-8efd-f26e34f8c229", 00:27:32.216 "is_configured": true, 00:27:32.216 "data_offset": 256, 00:27:32.216 "data_size": 7936 00:27:32.216 } 00:27:32.216 ] 00:27:32.216 }' 00:27:32.216 13:27:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:32.216 13:27:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:32.216 13:27:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:32.216 13:27:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:32.216 13:27:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:33.153 13:27:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:33.153 13:27:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:33.153 13:27:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:33.153 13:27:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:33.153 13:27:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:33.153 13:27:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:33.153 13:27:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:33.153 13:27:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:33.412 13:27:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:33.412 "name": "raid_bdev1", 00:27:33.412 "uuid": "3c43e0f3-9fd8-462b-a66d-9325101668a1", 00:27:33.412 "strip_size_kb": 0, 00:27:33.412 "state": "online", 00:27:33.412 "raid_level": "raid1", 00:27:33.412 "superblock": true, 00:27:33.412 "num_base_bdevs": 2, 00:27:33.412 "num_base_bdevs_discovered": 2, 00:27:33.412 "num_base_bdevs_operational": 2, 00:27:33.412 "process": { 00:27:33.412 "type": "rebuild", 00:27:33.412 "target": "spare", 00:27:33.412 "progress": { 00:27:33.412 "blocks": 7168, 00:27:33.412 "percent": 90 00:27:33.412 } 00:27:33.412 }, 00:27:33.412 "base_bdevs_list": [ 00:27:33.412 { 00:27:33.412 "name": "spare", 00:27:33.412 "uuid": "ff26d632-4cfc-52e1-8c8c-ff25d0b096db", 00:27:33.412 "is_configured": true, 00:27:33.412 "data_offset": 256, 00:27:33.412 "data_size": 7936 00:27:33.412 }, 00:27:33.412 { 00:27:33.412 "name": "BaseBdev2", 00:27:33.412 "uuid": "57d98246-dda4-5887-8efd-f26e34f8c229", 00:27:33.412 "is_configured": true, 00:27:33.412 "data_offset": 256, 00:27:33.412 
"data_size": 7936 00:27:33.412 } 00:27:33.412 ] 00:27:33.412 }' 00:27:33.412 13:27:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:33.412 13:27:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:33.412 13:27:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:33.412 13:27:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:33.412 13:27:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:33.671 [2024-07-26 13:27:14.056773] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:33.671 [2024-07-26 13:27:14.056831] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:33.671 [2024-07-26 13:27:14.056906] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:34.618 13:27:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:34.618 13:27:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:34.618 13:27:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:34.618 13:27:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:34.618 13:27:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:34.618 13:27:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:34.618 13:27:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:27:34.618 13:27:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:34.878 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:34.878 "name": "raid_bdev1", 00:27:34.878 "uuid": "3c43e0f3-9fd8-462b-a66d-9325101668a1", 00:27:34.878 "strip_size_kb": 0, 00:27:34.878 "state": "online", 00:27:34.878 "raid_level": "raid1", 00:27:34.878 "superblock": true, 00:27:34.878 "num_base_bdevs": 2, 00:27:34.878 "num_base_bdevs_discovered": 2, 00:27:34.878 "num_base_bdevs_operational": 2, 00:27:34.878 "base_bdevs_list": [ 00:27:34.878 { 00:27:34.878 "name": "spare", 00:27:34.878 "uuid": "ff26d632-4cfc-52e1-8c8c-ff25d0b096db", 00:27:34.878 "is_configured": true, 00:27:34.878 "data_offset": 256, 00:27:34.878 "data_size": 7936 00:27:34.878 }, 00:27:34.878 { 00:27:34.878 "name": "BaseBdev2", 00:27:34.878 "uuid": "57d98246-dda4-5887-8efd-f26e34f8c229", 00:27:34.878 "is_configured": true, 00:27:34.878 "data_offset": 256, 00:27:34.878 "data_size": 7936 00:27:34.878 } 00:27:34.878 ] 00:27:34.878 }' 00:27:34.878 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:34.878 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:34.878 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:34.878 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:34.878 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@724 -- # break 00:27:34.878 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:34.878 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:27:34.878 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:34.878 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:34.878 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:34.878 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:34.878 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:35.137 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:35.137 "name": "raid_bdev1", 00:27:35.137 "uuid": "3c43e0f3-9fd8-462b-a66d-9325101668a1", 00:27:35.137 "strip_size_kb": 0, 00:27:35.137 "state": "online", 00:27:35.137 "raid_level": "raid1", 00:27:35.137 "superblock": true, 00:27:35.137 "num_base_bdevs": 2, 00:27:35.137 "num_base_bdevs_discovered": 2, 00:27:35.137 "num_base_bdevs_operational": 2, 00:27:35.137 "base_bdevs_list": [ 00:27:35.137 { 00:27:35.137 "name": "spare", 00:27:35.137 "uuid": "ff26d632-4cfc-52e1-8c8c-ff25d0b096db", 00:27:35.137 "is_configured": true, 00:27:35.137 "data_offset": 256, 00:27:35.137 "data_size": 7936 00:27:35.137 }, 00:27:35.137 { 00:27:35.137 "name": "BaseBdev2", 00:27:35.137 "uuid": "57d98246-dda4-5887-8efd-f26e34f8c229", 00:27:35.137 "is_configured": true, 00:27:35.137 "data_offset": 256, 00:27:35.137 "data_size": 7936 00:27:35.137 } 00:27:35.137 ] 00:27:35.137 }' 00:27:35.137 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:35.137 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:35.137 13:27:15 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:35.137 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:35.137 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:35.137 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:35.137 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:35.138 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:35.138 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:35.138 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:35.138 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:35.138 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:35.138 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:35.138 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:35.138 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:35.138 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:35.397 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:35.397 "name": "raid_bdev1", 00:27:35.397 "uuid": 
"3c43e0f3-9fd8-462b-a66d-9325101668a1", 00:27:35.397 "strip_size_kb": 0, 00:27:35.397 "state": "online", 00:27:35.397 "raid_level": "raid1", 00:27:35.397 "superblock": true, 00:27:35.397 "num_base_bdevs": 2, 00:27:35.397 "num_base_bdevs_discovered": 2, 00:27:35.397 "num_base_bdevs_operational": 2, 00:27:35.397 "base_bdevs_list": [ 00:27:35.397 { 00:27:35.397 "name": "spare", 00:27:35.397 "uuid": "ff26d632-4cfc-52e1-8c8c-ff25d0b096db", 00:27:35.397 "is_configured": true, 00:27:35.397 "data_offset": 256, 00:27:35.397 "data_size": 7936 00:27:35.397 }, 00:27:35.397 { 00:27:35.397 "name": "BaseBdev2", 00:27:35.397 "uuid": "57d98246-dda4-5887-8efd-f26e34f8c229", 00:27:35.397 "is_configured": true, 00:27:35.397 "data_offset": 256, 00:27:35.397 "data_size": 7936 00:27:35.397 } 00:27:35.397 ] 00:27:35.397 }' 00:27:35.397 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:35.397 13:27:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:35.967 13:27:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:36.259 [2024-07-26 13:27:16.563634] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:36.259 [2024-07-26 13:27:16.563658] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:36.259 [2024-07-26 13:27:16.563715] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:36.259 [2024-07-26 13:27:16.563770] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:36.259 [2024-07-26 13:27:16.563781] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2648fb0 name raid_bdev1, state offline 00:27:36.259 13:27:16 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@735 -- # jq length 00:27:36.259 13:27:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:36.519 13:27:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:27:36.519 13:27:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:27:36.519 13:27:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:27:36.519 13:27:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:36.519 13:27:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:36.519 13:27:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:36.519 13:27:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:36.519 13:27:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:36.519 13:27:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:36.519 13:27:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:27:36.519 13:27:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:36.519 13:27:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:36.519 13:27:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:36.779 /dev/nbd0 00:27:36.779 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:36.779 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:36.779 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:27:36.779 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:27:36.779 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:36.779 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:36.779 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:27:36.779 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:27:36.779 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:36.779 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:36.779 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:36.779 1+0 records in 00:27:36.779 1+0 records out 00:27:36.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231257 s, 17.7 MB/s 00:27:36.779 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:36.779 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:27:36.779 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:36.779 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:36.779 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:27:36.779 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:36.779 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:36.779 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:37.038 /dev/nbd1 00:27:37.038 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:37.038 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:37.038 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:27:37.038 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:27:37.038 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:37.038 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:37.038 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:27:37.038 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:27:37.038 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:37.038 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:37.038 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:27:37.038 1+0 records in 00:27:37.038 1+0 records out 00:27:37.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310198 s, 13.2 MB/s 00:27:37.038 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:37.038 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:27:37.038 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:37.038 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:37.038 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:27:37.038 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:37.038 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:37.038 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:37.038 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:37.038 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:37.038 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:37.038 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:37.038 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:27:37.038 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:37.039 13:27:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:37.297 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:37.297 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:37.297 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:37.298 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:37.298 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:37.298 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:37.298 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:27:37.298 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:27:37.298 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:37.298 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:37.557 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:37.557 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:37.557 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:37.557 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:37.557 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:37.557 13:27:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:37.557 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:27:37.557 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:27:37.557 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:27:37.557 13:27:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:37.816 13:27:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:38.074 [2024-07-26 13:27:18.377845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:38.074 [2024-07-26 13:27:18.377887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:38.074 [2024-07-26 13:27:18.377906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x264b3e0 00:27:38.074 [2024-07-26 13:27:18.377917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:38.074 [2024-07-26 13:27:18.379286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:38.074 [2024-07-26 13:27:18.379312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:38.074 [2024-07-26 13:27:18.379365] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:38.074 [2024-07-26 13:27:18.379390] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:38.074 [2024-07-26 13:27:18.379479] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:38.074 spare 00:27:38.074 
13:27:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:38.074 13:27:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:38.075 13:27:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:38.075 13:27:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:38.075 13:27:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:38.075 13:27:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:38.075 13:27:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:38.075 13:27:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:38.075 13:27:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:38.075 13:27:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:38.075 13:27:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:38.075 13:27:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:38.075 [2024-07-26 13:27:18.479792] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x264a5c0 00:27:38.075 [2024-07-26 13:27:18.479805] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:27:38.075 [2024-07-26 13:27:18.479864] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25b6250 00:27:38.075 [2024-07-26 13:27:18.479971] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x264a5c0 00:27:38.075 [2024-07-26 13:27:18.479980] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x264a5c0 00:27:38.075 [2024-07-26 13:27:18.480050] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:38.334 13:27:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:38.334 "name": "raid_bdev1", 00:27:38.334 "uuid": "3c43e0f3-9fd8-462b-a66d-9325101668a1", 00:27:38.334 "strip_size_kb": 0, 00:27:38.334 "state": "online", 00:27:38.334 "raid_level": "raid1", 00:27:38.334 "superblock": true, 00:27:38.334 "num_base_bdevs": 2, 00:27:38.334 "num_base_bdevs_discovered": 2, 00:27:38.334 "num_base_bdevs_operational": 2, 00:27:38.334 "base_bdevs_list": [ 00:27:38.334 { 00:27:38.334 "name": "spare", 00:27:38.334 "uuid": "ff26d632-4cfc-52e1-8c8c-ff25d0b096db", 00:27:38.334 "is_configured": true, 00:27:38.334 "data_offset": 256, 00:27:38.334 "data_size": 7936 00:27:38.334 }, 00:27:38.334 { 00:27:38.334 "name": "BaseBdev2", 00:27:38.334 "uuid": "57d98246-dda4-5887-8efd-f26e34f8c229", 00:27:38.334 "is_configured": true, 00:27:38.334 "data_offset": 256, 00:27:38.334 "data_size": 7936 00:27:38.334 } 00:27:38.334 ] 00:27:38.334 }' 00:27:38.334 13:27:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:38.334 13:27:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:38.903 13:27:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:38.903 13:27:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:38.903 13:27:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:38.903 13:27:19 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@184 -- # local target=none 00:27:38.903 13:27:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:38.903 13:27:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:38.903 13:27:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:39.162 13:27:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:39.162 "name": "raid_bdev1", 00:27:39.162 "uuid": "3c43e0f3-9fd8-462b-a66d-9325101668a1", 00:27:39.162 "strip_size_kb": 0, 00:27:39.162 "state": "online", 00:27:39.162 "raid_level": "raid1", 00:27:39.162 "superblock": true, 00:27:39.162 "num_base_bdevs": 2, 00:27:39.162 "num_base_bdevs_discovered": 2, 00:27:39.162 "num_base_bdevs_operational": 2, 00:27:39.162 "base_bdevs_list": [ 00:27:39.162 { 00:27:39.162 "name": "spare", 00:27:39.162 "uuid": "ff26d632-4cfc-52e1-8c8c-ff25d0b096db", 00:27:39.162 "is_configured": true, 00:27:39.162 "data_offset": 256, 00:27:39.162 "data_size": 7936 00:27:39.162 }, 00:27:39.162 { 00:27:39.162 "name": "BaseBdev2", 00:27:39.162 "uuid": "57d98246-dda4-5887-8efd-f26e34f8c229", 00:27:39.162 "is_configured": true, 00:27:39.162 "data_offset": 256, 00:27:39.162 "data_size": 7936 00:27:39.162 } 00:27:39.162 ] 00:27:39.162 }' 00:27:39.162 13:27:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:39.162 13:27:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:39.163 13:27:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:39.163 13:27:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:39.163 
13:27:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:39.163 13:27:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:39.422 13:27:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:27:39.422 13:27:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:39.682 [2024-07-26 13:27:19.970186] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:39.682 13:27:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:39.682 13:27:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:39.682 13:27:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:39.682 13:27:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:39.682 13:27:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:39.682 13:27:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:39.682 13:27:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:39.682 13:27:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:39.682 13:27:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:39.682 13:27:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 
00:27:39.682 13:27:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:39.682 13:27:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:39.941 13:27:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:39.941 "name": "raid_bdev1", 00:27:39.941 "uuid": "3c43e0f3-9fd8-462b-a66d-9325101668a1", 00:27:39.941 "strip_size_kb": 0, 00:27:39.941 "state": "online", 00:27:39.941 "raid_level": "raid1", 00:27:39.941 "superblock": true, 00:27:39.941 "num_base_bdevs": 2, 00:27:39.941 "num_base_bdevs_discovered": 1, 00:27:39.941 "num_base_bdevs_operational": 1, 00:27:39.941 "base_bdevs_list": [ 00:27:39.941 { 00:27:39.941 "name": null, 00:27:39.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:39.941 "is_configured": false, 00:27:39.941 "data_offset": 256, 00:27:39.941 "data_size": 7936 00:27:39.941 }, 00:27:39.941 { 00:27:39.941 "name": "BaseBdev2", 00:27:39.941 "uuid": "57d98246-dda4-5887-8efd-f26e34f8c229", 00:27:39.941 "is_configured": true, 00:27:39.941 "data_offset": 256, 00:27:39.941 "data_size": 7936 00:27:39.941 } 00:27:39.941 ] 00:27:39.941 }' 00:27:39.941 13:27:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:39.941 13:27:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:40.510 13:27:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:40.510 [2024-07-26 13:27:21.013078] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:40.510 [2024-07-26 13:27:21.013223] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:40.510 [2024-07-26 13:27:21.013240] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:40.510 [2024-07-26 13:27:21.013266] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:40.510 [2024-07-26 13:27:21.015352] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25b6250 00:27:40.510 [2024-07-26 13:27:21.017523] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:40.510 13:27:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # sleep 1 00:27:41.890 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:41.890 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:41.890 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:41.890 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:41.890 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:41.890 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:41.890 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:41.890 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:41.890 "name": "raid_bdev1", 00:27:41.890 "uuid": "3c43e0f3-9fd8-462b-a66d-9325101668a1", 00:27:41.890 "strip_size_kb": 0, 00:27:41.890 "state": "online", 00:27:41.890 "raid_level": "raid1", 00:27:41.890 
"superblock": true, 00:27:41.890 "num_base_bdevs": 2, 00:27:41.890 "num_base_bdevs_discovered": 2, 00:27:41.890 "num_base_bdevs_operational": 2, 00:27:41.890 "process": { 00:27:41.890 "type": "rebuild", 00:27:41.890 "target": "spare", 00:27:41.890 "progress": { 00:27:41.890 "blocks": 3072, 00:27:41.890 "percent": 38 00:27:41.890 } 00:27:41.890 }, 00:27:41.890 "base_bdevs_list": [ 00:27:41.890 { 00:27:41.890 "name": "spare", 00:27:41.890 "uuid": "ff26d632-4cfc-52e1-8c8c-ff25d0b096db", 00:27:41.890 "is_configured": true, 00:27:41.890 "data_offset": 256, 00:27:41.890 "data_size": 7936 00:27:41.890 }, 00:27:41.890 { 00:27:41.890 "name": "BaseBdev2", 00:27:41.890 "uuid": "57d98246-dda4-5887-8efd-f26e34f8c229", 00:27:41.890 "is_configured": true, 00:27:41.890 "data_offset": 256, 00:27:41.890 "data_size": 7936 00:27:41.890 } 00:27:41.890 ] 00:27:41.890 }' 00:27:41.890 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:41.890 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:41.890 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:41.890 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:41.890 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:42.150 [2024-07-26 13:27:22.558132] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:42.150 [2024-07-26 13:27:22.629327] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:42.150 [2024-07-26 13:27:22.629376] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:42.150 [2024-07-26 13:27:22.629391] 
bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:42.150 [2024-07-26 13:27:22.629399] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:42.150 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:42.150 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:42.150 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:42.150 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:42.150 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:42.150 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:42.150 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:42.150 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:42.150 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:42.150 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:42.150 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.150 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:42.410 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:42.410 "name": "raid_bdev1", 00:27:42.410 "uuid": "3c43e0f3-9fd8-462b-a66d-9325101668a1", 
00:27:42.410 "strip_size_kb": 0, 00:27:42.410 "state": "online", 00:27:42.410 "raid_level": "raid1", 00:27:42.410 "superblock": true, 00:27:42.410 "num_base_bdevs": 2, 00:27:42.410 "num_base_bdevs_discovered": 1, 00:27:42.410 "num_base_bdevs_operational": 1, 00:27:42.410 "base_bdevs_list": [ 00:27:42.410 { 00:27:42.410 "name": null, 00:27:42.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:42.410 "is_configured": false, 00:27:42.410 "data_offset": 256, 00:27:42.410 "data_size": 7936 00:27:42.410 }, 00:27:42.410 { 00:27:42.410 "name": "BaseBdev2", 00:27:42.410 "uuid": "57d98246-dda4-5887-8efd-f26e34f8c229", 00:27:42.410 "is_configured": true, 00:27:42.410 "data_offset": 256, 00:27:42.410 "data_size": 7936 00:27:42.410 } 00:27:42.410 ] 00:27:42.410 }' 00:27:42.410 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:42.410 13:27:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:42.977 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:43.235 [2024-07-26 13:27:23.659025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:43.235 [2024-07-26 13:27:23.659070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:43.235 [2024-07-26 13:27:23.659094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25b6050 00:27:43.235 [2024-07-26 13:27:23.659106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:43.235 [2024-07-26 13:27:23.659312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:43.235 [2024-07-26 13:27:23.659327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:43.235 [2024-07-26 13:27:23.659381] 
bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:43.235 [2024-07-26 13:27:23.659391] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:43.235 [2024-07-26 13:27:23.659401] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:43.235 [2024-07-26 13:27:23.659417] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:43.235 [2024-07-26 13:27:23.661517] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x24b0830 00:27:43.235 [2024-07-26 13:27:23.662867] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:43.235 spare 00:27:43.235 13:27:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # sleep 1 00:27:44.173 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:44.173 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:44.173 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:44.173 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:44.173 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:44.173 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:44.173 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:44.432 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:27:44.432 "name": "raid_bdev1", 00:27:44.432 "uuid": "3c43e0f3-9fd8-462b-a66d-9325101668a1", 00:27:44.432 "strip_size_kb": 0, 00:27:44.432 "state": "online", 00:27:44.432 "raid_level": "raid1", 00:27:44.432 "superblock": true, 00:27:44.432 "num_base_bdevs": 2, 00:27:44.432 "num_base_bdevs_discovered": 2, 00:27:44.432 "num_base_bdevs_operational": 2, 00:27:44.432 "process": { 00:27:44.432 "type": "rebuild", 00:27:44.432 "target": "spare", 00:27:44.432 "progress": { 00:27:44.432 "blocks": 3072, 00:27:44.432 "percent": 38 00:27:44.432 } 00:27:44.432 }, 00:27:44.432 "base_bdevs_list": [ 00:27:44.432 { 00:27:44.432 "name": "spare", 00:27:44.432 "uuid": "ff26d632-4cfc-52e1-8c8c-ff25d0b096db", 00:27:44.432 "is_configured": true, 00:27:44.432 "data_offset": 256, 00:27:44.432 "data_size": 7936 00:27:44.432 }, 00:27:44.432 { 00:27:44.432 "name": "BaseBdev2", 00:27:44.432 "uuid": "57d98246-dda4-5887-8efd-f26e34f8c229", 00:27:44.432 "is_configured": true, 00:27:44.432 "data_offset": 256, 00:27:44.432 "data_size": 7936 00:27:44.432 } 00:27:44.432 ] 00:27:44.432 }' 00:27:44.432 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:44.692 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:44.692 13:27:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:44.692 13:27:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:44.692 13:27:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:44.951 [2024-07-26 13:27:25.220432] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:44.951 [2024-07-26 13:27:25.274705] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished 
rebuild on raid bdev raid_bdev1: No such device 00:27:44.951 [2024-07-26 13:27:25.274750] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:44.951 [2024-07-26 13:27:25.274764] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:44.951 [2024-07-26 13:27:25.274772] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:44.951 13:27:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:44.951 13:27:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:44.951 13:27:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:44.951 13:27:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:44.951 13:27:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:44.951 13:27:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:44.951 13:27:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:44.951 13:27:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:44.951 13:27:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:44.951 13:27:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:44.951 13:27:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:44.951 13:27:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:45.210 
13:27:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:45.210 "name": "raid_bdev1", 00:27:45.210 "uuid": "3c43e0f3-9fd8-462b-a66d-9325101668a1", 00:27:45.210 "strip_size_kb": 0, 00:27:45.210 "state": "online", 00:27:45.210 "raid_level": "raid1", 00:27:45.210 "superblock": true, 00:27:45.210 "num_base_bdevs": 2, 00:27:45.210 "num_base_bdevs_discovered": 1, 00:27:45.210 "num_base_bdevs_operational": 1, 00:27:45.210 "base_bdevs_list": [ 00:27:45.210 { 00:27:45.210 "name": null, 00:27:45.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:45.210 "is_configured": false, 00:27:45.210 "data_offset": 256, 00:27:45.210 "data_size": 7936 00:27:45.210 }, 00:27:45.210 { 00:27:45.210 "name": "BaseBdev2", 00:27:45.211 "uuid": "57d98246-dda4-5887-8efd-f26e34f8c229", 00:27:45.211 "is_configured": true, 00:27:45.211 "data_offset": 256, 00:27:45.211 "data_size": 7936 00:27:45.211 } 00:27:45.211 ] 00:27:45.211 }' 00:27:45.211 13:27:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:45.211 13:27:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:45.779 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:45.779 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:45.779 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:45.779 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:45.779 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:45.779 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:45.779 13:27:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:46.060 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:46.060 "name": "raid_bdev1", 00:27:46.060 "uuid": "3c43e0f3-9fd8-462b-a66d-9325101668a1", 00:27:46.060 "strip_size_kb": 0, 00:27:46.060 "state": "online", 00:27:46.060 "raid_level": "raid1", 00:27:46.060 "superblock": true, 00:27:46.060 "num_base_bdevs": 2, 00:27:46.060 "num_base_bdevs_discovered": 1, 00:27:46.060 "num_base_bdevs_operational": 1, 00:27:46.060 "base_bdevs_list": [ 00:27:46.060 { 00:27:46.060 "name": null, 00:27:46.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:46.060 "is_configured": false, 00:27:46.060 "data_offset": 256, 00:27:46.060 "data_size": 7936 00:27:46.060 }, 00:27:46.060 { 00:27:46.060 "name": "BaseBdev2", 00:27:46.060 "uuid": "57d98246-dda4-5887-8efd-f26e34f8c229", 00:27:46.060 "is_configured": true, 00:27:46.060 "data_offset": 256, 00:27:46.060 "data_size": 7936 00:27:46.060 } 00:27:46.060 ] 00:27:46.060 }' 00:27:46.060 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:46.060 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:46.060 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:46.060 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:46.060 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:27:46.320 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@788 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:46.579 [2024-07-26 13:27:26.861976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:46.579 [2024-07-26 13:27:26.862022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:46.579 [2024-07-26 13:27:26.862041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24b1930 00:27:46.579 [2024-07-26 13:27:26.862053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:46.579 [2024-07-26 13:27:26.862227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:46.579 [2024-07-26 13:27:26.862247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:46.579 [2024-07-26 13:27:26.862288] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:46.579 [2024-07-26 13:27:26.862298] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:46.579 [2024-07-26 13:27:26.862308] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:46.579 BaseBdev1 00:27:46.579 13:27:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@789 -- # sleep 1 00:27:47.517 13:27:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:47.517 13:27:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:47.517 13:27:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:47.517 13:27:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:47.517 13:27:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:47.517 13:27:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:47.517 13:27:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:47.517 13:27:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:47.517 13:27:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:47.517 13:27:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:47.517 13:27:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.517 13:27:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:47.776 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:47.776 "name": "raid_bdev1", 00:27:47.776 "uuid": "3c43e0f3-9fd8-462b-a66d-9325101668a1", 00:27:47.776 "strip_size_kb": 0, 00:27:47.776 "state": "online", 00:27:47.776 "raid_level": "raid1", 00:27:47.776 "superblock": true, 00:27:47.776 "num_base_bdevs": 2, 00:27:47.776 "num_base_bdevs_discovered": 1, 00:27:47.776 "num_base_bdevs_operational": 1, 00:27:47.776 "base_bdevs_list": [ 00:27:47.776 { 00:27:47.776 "name": null, 00:27:47.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:47.776 "is_configured": false, 00:27:47.776 "data_offset": 256, 00:27:47.776 "data_size": 7936 00:27:47.776 }, 00:27:47.776 { 00:27:47.776 "name": "BaseBdev2", 00:27:47.776 "uuid": "57d98246-dda4-5887-8efd-f26e34f8c229", 00:27:47.776 "is_configured": true, 00:27:47.776 "data_offset": 256, 00:27:47.776 "data_size": 7936 00:27:47.776 } 00:27:47.776 ] 
00:27:47.776 }' 00:27:47.776 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:47.776 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:48.344 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:48.344 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:48.344 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:48.344 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:48.344 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:48.344 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:48.344 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:48.603 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:48.603 "name": "raid_bdev1", 00:27:48.603 "uuid": "3c43e0f3-9fd8-462b-a66d-9325101668a1", 00:27:48.603 "strip_size_kb": 0, 00:27:48.603 "state": "online", 00:27:48.603 "raid_level": "raid1", 00:27:48.603 "superblock": true, 00:27:48.603 "num_base_bdevs": 2, 00:27:48.603 "num_base_bdevs_discovered": 1, 00:27:48.603 "num_base_bdevs_operational": 1, 00:27:48.603 "base_bdevs_list": [ 00:27:48.603 { 00:27:48.603 "name": null, 00:27:48.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:48.603 "is_configured": false, 00:27:48.603 "data_offset": 256, 00:27:48.603 "data_size": 7936 00:27:48.603 }, 00:27:48.603 { 00:27:48.603 "name": "BaseBdev2", 00:27:48.603 "uuid": 
"57d98246-dda4-5887-8efd-f26e34f8c229", 00:27:48.603 "is_configured": true, 00:27:48.603 "data_offset": 256, 00:27:48.603 "data_size": 7936 00:27:48.603 } 00:27:48.603 ] 00:27:48.603 }' 00:27:48.603 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:48.603 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:48.603 13:27:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:48.603 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:48.603 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:48.603 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:27:48.603 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:48.603 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:48.603 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:48.603 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:48.603 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:48.603 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type 
-P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:48.603 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:48.603 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:27:48.603 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:27:48.603 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:48.863 [2024-07-26 13:27:29.232230] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:48.863 [2024-07-26 13:27:29.232344] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:48.863 [2024-07-26 13:27:29.232359] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:48.863 request: 00:27:48.863 { 00:27:48.863 "base_bdev": "BaseBdev1", 00:27:48.863 "raid_bdev": "raid_bdev1", 00:27:48.863 "method": "bdev_raid_add_base_bdev", 00:27:48.863 "req_id": 1 00:27:48.863 } 00:27:48.863 Got JSON-RPC error response 00:27:48.863 response: 00:27:48.863 { 00:27:48.863 "code": -22, 00:27:48.863 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:48.863 } 00:27:48.863 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:27:48.863 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:48.863 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:48.863 
13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:48.863 13:27:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@793 -- # sleep 1 00:27:49.832 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:49.832 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:49.832 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:49.832 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:49.832 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:49.832 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:49.832 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:49.832 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:49.832 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:49.832 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:49.832 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:49.832 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:50.092 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:50.092 "name": "raid_bdev1", 00:27:50.092 "uuid": "3c43e0f3-9fd8-462b-a66d-9325101668a1", 00:27:50.092 
"strip_size_kb": 0, 00:27:50.092 "state": "online", 00:27:50.092 "raid_level": "raid1", 00:27:50.092 "superblock": true, 00:27:50.092 "num_base_bdevs": 2, 00:27:50.092 "num_base_bdevs_discovered": 1, 00:27:50.092 "num_base_bdevs_operational": 1, 00:27:50.092 "base_bdevs_list": [ 00:27:50.092 { 00:27:50.092 "name": null, 00:27:50.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.092 "is_configured": false, 00:27:50.092 "data_offset": 256, 00:27:50.092 "data_size": 7936 00:27:50.092 }, 00:27:50.092 { 00:27:50.092 "name": "BaseBdev2", 00:27:50.092 "uuid": "57d98246-dda4-5887-8efd-f26e34f8c229", 00:27:50.092 "is_configured": true, 00:27:50.092 "data_offset": 256, 00:27:50.092 "data_size": 7936 00:27:50.092 } 00:27:50.092 ] 00:27:50.092 }' 00:27:50.092 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:50.092 13:27:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:50.661 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:50.661 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:50.661 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:50.661 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:50.661 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:50.661 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:50.661 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:50.921 13:27:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:50.921 "name": "raid_bdev1", 00:27:50.921 "uuid": "3c43e0f3-9fd8-462b-a66d-9325101668a1", 00:27:50.921 "strip_size_kb": 0, 00:27:50.921 "state": "online", 00:27:50.921 "raid_level": "raid1", 00:27:50.921 "superblock": true, 00:27:50.921 "num_base_bdevs": 2, 00:27:50.921 "num_base_bdevs_discovered": 1, 00:27:50.921 "num_base_bdevs_operational": 1, 00:27:50.921 "base_bdevs_list": [ 00:27:50.921 { 00:27:50.921 "name": null, 00:27:50.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:50.921 "is_configured": false, 00:27:50.921 "data_offset": 256, 00:27:50.921 "data_size": 7936 00:27:50.921 }, 00:27:50.921 { 00:27:50.921 "name": "BaseBdev2", 00:27:50.921 "uuid": "57d98246-dda4-5887-8efd-f26e34f8c229", 00:27:50.921 "is_configured": true, 00:27:50.921 "data_offset": 256, 00:27:50.921 "data_size": 7936 00:27:50.921 } 00:27:50.921 ] 00:27:50.921 }' 00:27:50.921 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:50.921 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:50.921 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:50.921 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:50.921 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@798 -- # killprocess 834469 00:27:50.921 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 834469 ']' 00:27:50.921 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 834469 00:27:50.921 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:27:50.921 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:50.921 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 834469 00:27:50.921 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:50.921 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:50.921 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 834469' 00:27:50.921 killing process with pid 834469 00:27:50.921 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 834469 00:27:50.921 Received shutdown signal, test time was about 60.000000 seconds 00:27:50.921 00:27:50.921 Latency(us) 00:27:50.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:50.921 =================================================================================================================== 00:27:50.921 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:50.921 [2024-07-26 13:27:31.420712] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:50.922 [2024-07-26 13:27:31.420797] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:50.922 [2024-07-26 13:27:31.420838] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:50.922 [2024-07-26 13:27:31.420849] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x264a5c0 name raid_bdev1, state offline 00:27:50.922 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 834469 00:27:51.181 [2024-07-26 13:27:31.448907] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:51.181 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@800 -- # return 0 
00:27:51.181 00:27:51.181 real 0m30.310s 00:27:51.181 user 0m46.896s 00:27:51.181 sys 0m4.959s 00:27:51.181 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:51.181 13:27:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:27:51.181 ************************************ 00:27:51.181 END TEST raid_rebuild_test_sb_md_separate 00:27:51.181 ************************************ 00:27:51.181 13:27:31 bdev_raid -- bdev/bdev_raid.sh@991 -- # base_malloc_params='-m 32 -i' 00:27:51.181 13:27:31 bdev_raid -- bdev/bdev_raid.sh@992 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:27:51.181 13:27:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:27:51.181 13:27:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:51.181 13:27:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:51.441 ************************************ 00:27:51.441 START TEST raid_state_function_test_sb_md_interleaved 00:27:51.441 ************************************ 00:27:51.441 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:27:51.441 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:27:51.441 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:27:51.441 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:27:51.441 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:27:51.441 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:27:51.441 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:51.441 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:27:51.441 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:51.441 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:51.441 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:27:51.441 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:27:51.441 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:27:51.441 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:51.441 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:27:51.441 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:27:51.441 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:27:51.441 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:27:51.441 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:27:51.441 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:27:51.441 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:27:51.441 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:27:51.442 13:27:31 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:27:51.442 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=840428 00:27:51.442 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 840428' 00:27:51.442 Process raid pid: 840428 00:27:51.442 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:27:51.442 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 840428 /var/tmp/spdk-raid.sock 00:27:51.442 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 840428 ']' 00:27:51.442 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:51.442 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:51.442 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:51.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:51.442 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:51.442 13:27:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:51.442 [2024-07-26 13:27:31.793224] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:27:51.442 [2024-07-26 13:27:31.793280] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:51.442 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:51.442 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:51.442 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:51.442 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:51.442 [2024-07-26 13:27:31.924889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.702 [2024-07-26 13:27:32.012020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.702 [2024-07-26 13:27:32.070429] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:51.702 [2024-07-26 13:27:32.070465] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:52.270 13:27:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:52.270 13:27:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:27:52.270 13:27:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:52.530 [2024-07-26 13:27:32.904825] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:52.530 [2024-07-26 13:27:32.904862] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:52.530 [2024-07-26 13:27:32.904872] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:52.530 [2024-07-26 13:27:32.904883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:52.530 13:27:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:52.530 13:27:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:52.530 13:27:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:52.530 13:27:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:52.530 13:27:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:52.530 13:27:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:52.530 13:27:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:52.530 13:27:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:52.530 13:27:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:52.530 13:27:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:52.530 13:27:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:52.530 13:27:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:27:52.789 13:27:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:52.789 "name": "Existed_Raid", 00:27:52.789 "uuid": "6815f9c3-b0f7-46df-98c1-9daf982a81bc", 00:27:52.789 "strip_size_kb": 0, 00:27:52.789 "state": "configuring", 00:27:52.789 "raid_level": "raid1", 00:27:52.789 "superblock": true, 00:27:52.789 "num_base_bdevs": 2, 00:27:52.789 "num_base_bdevs_discovered": 0, 00:27:52.789 "num_base_bdevs_operational": 2, 00:27:52.789 "base_bdevs_list": [ 00:27:52.789 { 00:27:52.789 "name": "BaseBdev1", 00:27:52.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:52.789 "is_configured": false, 00:27:52.789 "data_offset": 0, 00:27:52.789 "data_size": 0 00:27:52.789 }, 00:27:52.790 { 00:27:52.790 "name": "BaseBdev2", 00:27:52.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:52.790 "is_configured": false, 00:27:52.790 "data_offset": 0, 00:27:52.790 "data_size": 0 00:27:52.790 } 00:27:52.790 ] 00:27:52.790 }' 00:27:52.790 13:27:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:52.790 13:27:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:53.357 13:27:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:53.616 [2024-07-26 13:27:33.935421] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:53.616 [2024-07-26 13:27:33.935450] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa50f20 name Existed_Raid, state configuring 00:27:53.616 13:27:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 
'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:53.875 [2024-07-26 13:27:34.168050] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:53.875 [2024-07-26 13:27:34.168076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:53.875 [2024-07-26 13:27:34.168085] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:53.875 [2024-07-26 13:27:34.168095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:53.875 13:27:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:27:54.134 [2024-07-26 13:27:34.402122] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:54.134 BaseBdev1 00:27:54.134 13:27:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:27:54.134 13:27:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:27:54.134 13:27:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:54.134 13:27:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:27:54.134 13:27:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:54.134 13:27:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:54.134 13:27:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:54.134 13:27:34 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:54.393 [ 00:27:54.393 { 00:27:54.393 "name": "BaseBdev1", 00:27:54.393 "aliases": [ 00:27:54.393 "0c4e9d08-7aea-4b7a-a87e-2ccd0e6e105c" 00:27:54.393 ], 00:27:54.393 "product_name": "Malloc disk", 00:27:54.393 "block_size": 4128, 00:27:54.393 "num_blocks": 8192, 00:27:54.393 "uuid": "0c4e9d08-7aea-4b7a-a87e-2ccd0e6e105c", 00:27:54.393 "md_size": 32, 00:27:54.393 "md_interleave": true, 00:27:54.393 "dif_type": 0, 00:27:54.393 "assigned_rate_limits": { 00:27:54.393 "rw_ios_per_sec": 0, 00:27:54.393 "rw_mbytes_per_sec": 0, 00:27:54.393 "r_mbytes_per_sec": 0, 00:27:54.393 "w_mbytes_per_sec": 0 00:27:54.393 }, 00:27:54.393 "claimed": true, 00:27:54.393 "claim_type": "exclusive_write", 00:27:54.393 "zoned": false, 00:27:54.393 "supported_io_types": { 00:27:54.393 "read": true, 00:27:54.393 "write": true, 00:27:54.393 "unmap": true, 00:27:54.393 "flush": true, 00:27:54.393 "reset": true, 00:27:54.393 "nvme_admin": false, 00:27:54.394 "nvme_io": false, 00:27:54.394 "nvme_io_md": false, 00:27:54.394 "write_zeroes": true, 00:27:54.394 "zcopy": true, 00:27:54.394 "get_zone_info": false, 00:27:54.394 "zone_management": false, 00:27:54.394 "zone_append": false, 00:27:54.394 "compare": false, 00:27:54.394 "compare_and_write": false, 00:27:54.394 "abort": true, 00:27:54.394 "seek_hole": false, 00:27:54.394 "seek_data": false, 00:27:54.394 "copy": true, 00:27:54.394 "nvme_iov_md": false 00:27:54.394 }, 00:27:54.394 "memory_domains": [ 00:27:54.394 { 00:27:54.394 "dma_device_id": "system", 00:27:54.394 "dma_device_type": 1 00:27:54.394 }, 00:27:54.394 { 00:27:54.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:54.394 "dma_device_type": 2 00:27:54.394 } 00:27:54.394 ], 00:27:54.394 "driver_specific": {} 00:27:54.394 } 00:27:54.394 ] 00:27:54.394 13:27:34 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:27:54.394 13:27:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:54.394 13:27:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:54.394 13:27:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:54.394 13:27:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:54.394 13:27:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:54.394 13:27:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:54.394 13:27:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:54.394 13:27:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:54.394 13:27:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:54.394 13:27:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:54.394 13:27:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:54.394 13:27:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:54.653 13:27:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:54.653 "name": "Existed_Raid", 00:27:54.653 "uuid": 
"af936301-917b-48ed-912d-b344e67e9fa5", 00:27:54.653 "strip_size_kb": 0, 00:27:54.653 "state": "configuring", 00:27:54.653 "raid_level": "raid1", 00:27:54.653 "superblock": true, 00:27:54.653 "num_base_bdevs": 2, 00:27:54.653 "num_base_bdevs_discovered": 1, 00:27:54.653 "num_base_bdevs_operational": 2, 00:27:54.653 "base_bdevs_list": [ 00:27:54.653 { 00:27:54.653 "name": "BaseBdev1", 00:27:54.653 "uuid": "0c4e9d08-7aea-4b7a-a87e-2ccd0e6e105c", 00:27:54.653 "is_configured": true, 00:27:54.653 "data_offset": 256, 00:27:54.653 "data_size": 7936 00:27:54.653 }, 00:27:54.653 { 00:27:54.653 "name": "BaseBdev2", 00:27:54.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:54.653 "is_configured": false, 00:27:54.653 "data_offset": 0, 00:27:54.653 "data_size": 0 00:27:54.653 } 00:27:54.653 ] 00:27:54.653 }' 00:27:54.653 13:27:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:54.653 13:27:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:55.222 13:27:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:27:55.481 [2024-07-26 13:27:35.845958] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:55.481 [2024-07-26 13:27:35.845991] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa50810 name Existed_Raid, state configuring 00:27:55.481 13:27:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:27:55.742 [2024-07-26 13:27:36.074593] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:55.742 [2024-07-26 13:27:36.075966] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:55.742 [2024-07-26 13:27:36.075997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:55.742 13:27:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:27:55.742 13:27:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:55.742 13:27:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:55.742 13:27:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:55.742 13:27:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:55.742 13:27:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:55.742 13:27:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:55.742 13:27:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:55.742 13:27:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:55.742 13:27:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:55.742 13:27:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:55.742 13:27:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:55.742 13:27:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:27:55.742 13:27:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:56.001 13:27:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:56.001 "name": "Existed_Raid", 00:27:56.001 "uuid": "e0447387-ba47-4d33-8bcb-6067a6f07270", 00:27:56.001 "strip_size_kb": 0, 00:27:56.001 "state": "configuring", 00:27:56.001 "raid_level": "raid1", 00:27:56.001 "superblock": true, 00:27:56.001 "num_base_bdevs": 2, 00:27:56.001 "num_base_bdevs_discovered": 1, 00:27:56.001 "num_base_bdevs_operational": 2, 00:27:56.001 "base_bdevs_list": [ 00:27:56.001 { 00:27:56.001 "name": "BaseBdev1", 00:27:56.001 "uuid": "0c4e9d08-7aea-4b7a-a87e-2ccd0e6e105c", 00:27:56.001 "is_configured": true, 00:27:56.001 "data_offset": 256, 00:27:56.001 "data_size": 7936 00:27:56.001 }, 00:27:56.001 { 00:27:56.001 "name": "BaseBdev2", 00:27:56.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:56.001 "is_configured": false, 00:27:56.001 "data_offset": 0, 00:27:56.001 "data_size": 0 00:27:56.001 } 00:27:56.001 ] 00:27:56.001 }' 00:27:56.001 13:27:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:56.001 13:27:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:56.569 13:27:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:27:56.828 [2024-07-26 13:27:37.124625] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:56.828 [2024-07-26 13:27:37.124743] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0xa526c0 00:27:56.828 [2024-07-26 13:27:37.124756] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:27:56.828 [2024-07-26 13:27:37.124810] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa4ff10 00:27:56.828 [2024-07-26 13:27:37.124879] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xa526c0 00:27:56.828 [2024-07-26 13:27:37.124888] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xa526c0 00:27:56.828 [2024-07-26 13:27:37.124939] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:56.828 BaseBdev2 00:27:56.828 13:27:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:27:56.828 13:27:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:27:56.828 13:27:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:56.828 13:27:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:27:56.828 13:27:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:56.828 13:27:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:56.828 13:27:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:27:57.087 13:27:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:57.087 [ 00:27:57.087 { 00:27:57.087 "name": "BaseBdev2", 00:27:57.087 "aliases": [ 00:27:57.087 "46cfacd5-6960-4396-9545-d6988734d522" 00:27:57.087 ], 00:27:57.087 "product_name": "Malloc 
disk", 00:27:57.087 "block_size": 4128, 00:27:57.087 "num_blocks": 8192, 00:27:57.087 "uuid": "46cfacd5-6960-4396-9545-d6988734d522", 00:27:57.087 "md_size": 32, 00:27:57.087 "md_interleave": true, 00:27:57.087 "dif_type": 0, 00:27:57.087 "assigned_rate_limits": { 00:27:57.087 "rw_ios_per_sec": 0, 00:27:57.087 "rw_mbytes_per_sec": 0, 00:27:57.087 "r_mbytes_per_sec": 0, 00:27:57.087 "w_mbytes_per_sec": 0 00:27:57.087 }, 00:27:57.087 "claimed": true, 00:27:57.087 "claim_type": "exclusive_write", 00:27:57.087 "zoned": false, 00:27:57.087 "supported_io_types": { 00:27:57.087 "read": true, 00:27:57.087 "write": true, 00:27:57.087 "unmap": true, 00:27:57.087 "flush": true, 00:27:57.087 "reset": true, 00:27:57.087 "nvme_admin": false, 00:27:57.087 "nvme_io": false, 00:27:57.087 "nvme_io_md": false, 00:27:57.087 "write_zeroes": true, 00:27:57.087 "zcopy": true, 00:27:57.087 "get_zone_info": false, 00:27:57.087 "zone_management": false, 00:27:57.087 "zone_append": false, 00:27:57.087 "compare": false, 00:27:57.087 "compare_and_write": false, 00:27:57.087 "abort": true, 00:27:57.087 "seek_hole": false, 00:27:57.087 "seek_data": false, 00:27:57.087 "copy": true, 00:27:57.087 "nvme_iov_md": false 00:27:57.087 }, 00:27:57.087 "memory_domains": [ 00:27:57.087 { 00:27:57.087 "dma_device_id": "system", 00:27:57.087 "dma_device_type": 1 00:27:57.087 }, 00:27:57.087 { 00:27:57.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:57.087 "dma_device_type": 2 00:27:57.087 } 00:27:57.087 ], 00:27:57.087 "driver_specific": {} 00:27:57.087 } 00:27:57.087 ] 00:27:57.087 13:27:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:27:57.087 13:27:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:27:57.087 13:27:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:27:57.087 13:27:37 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:27:57.087 13:27:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:57.087 13:27:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:57.087 13:27:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:57.087 13:27:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:57.087 13:27:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:57.087 13:27:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:57.087 13:27:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:57.087 13:27:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:57.087 13:27:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:57.087 13:27:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:57.087 13:27:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:57.346 13:27:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:57.346 "name": "Existed_Raid", 00:27:57.346 "uuid": "e0447387-ba47-4d33-8bcb-6067a6f07270", 00:27:57.346 "strip_size_kb": 0, 00:27:57.346 "state": "online", 00:27:57.346 "raid_level": "raid1", 00:27:57.346 "superblock": true, 00:27:57.346 
"num_base_bdevs": 2, 00:27:57.346 "num_base_bdevs_discovered": 2, 00:27:57.346 "num_base_bdevs_operational": 2, 00:27:57.346 "base_bdevs_list": [ 00:27:57.346 { 00:27:57.346 "name": "BaseBdev1", 00:27:57.346 "uuid": "0c4e9d08-7aea-4b7a-a87e-2ccd0e6e105c", 00:27:57.346 "is_configured": true, 00:27:57.346 "data_offset": 256, 00:27:57.346 "data_size": 7936 00:27:57.346 }, 00:27:57.346 { 00:27:57.346 "name": "BaseBdev2", 00:27:57.346 "uuid": "46cfacd5-6960-4396-9545-d6988734d522", 00:27:57.346 "is_configured": true, 00:27:57.346 "data_offset": 256, 00:27:57.346 "data_size": 7936 00:27:57.346 } 00:27:57.346 ] 00:27:57.346 }' 00:27:57.346 13:27:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:57.346 13:27:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:27:57.913 13:27:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:27:57.913 13:27:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:27:57.913 13:27:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:27:57.913 13:27:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:27:57.913 13:27:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:27:57.913 13:27:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:27:57.913 13:27:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:27:57.913 13:27:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:27:58.173 
[2024-07-26 13:27:38.624852] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:58.173 13:27:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:27:58.173 "name": "Existed_Raid", 00:27:58.173 "aliases": [ 00:27:58.173 "e0447387-ba47-4d33-8bcb-6067a6f07270" 00:27:58.173 ], 00:27:58.173 "product_name": "Raid Volume", 00:27:58.173 "block_size": 4128, 00:27:58.173 "num_blocks": 7936, 00:27:58.173 "uuid": "e0447387-ba47-4d33-8bcb-6067a6f07270", 00:27:58.173 "md_size": 32, 00:27:58.173 "md_interleave": true, 00:27:58.173 "dif_type": 0, 00:27:58.173 "assigned_rate_limits": { 00:27:58.173 "rw_ios_per_sec": 0, 00:27:58.173 "rw_mbytes_per_sec": 0, 00:27:58.173 "r_mbytes_per_sec": 0, 00:27:58.173 "w_mbytes_per_sec": 0 00:27:58.173 }, 00:27:58.173 "claimed": false, 00:27:58.173 "zoned": false, 00:27:58.173 "supported_io_types": { 00:27:58.173 "read": true, 00:27:58.173 "write": true, 00:27:58.173 "unmap": false, 00:27:58.173 "flush": false, 00:27:58.173 "reset": true, 00:27:58.173 "nvme_admin": false, 00:27:58.173 "nvme_io": false, 00:27:58.173 "nvme_io_md": false, 00:27:58.173 "write_zeroes": true, 00:27:58.173 "zcopy": false, 00:27:58.173 "get_zone_info": false, 00:27:58.173 "zone_management": false, 00:27:58.173 "zone_append": false, 00:27:58.173 "compare": false, 00:27:58.173 "compare_and_write": false, 00:27:58.173 "abort": false, 00:27:58.173 "seek_hole": false, 00:27:58.173 "seek_data": false, 00:27:58.173 "copy": false, 00:27:58.173 "nvme_iov_md": false 00:27:58.173 }, 00:27:58.173 "memory_domains": [ 00:27:58.173 { 00:27:58.173 "dma_device_id": "system", 00:27:58.173 "dma_device_type": 1 00:27:58.173 }, 00:27:58.173 { 00:27:58.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:58.173 "dma_device_type": 2 00:27:58.173 }, 00:27:58.173 { 00:27:58.173 "dma_device_id": "system", 00:27:58.173 "dma_device_type": 1 00:27:58.173 }, 00:27:58.173 { 00:27:58.173 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:27:58.173 "dma_device_type": 2 00:27:58.173 } 00:27:58.173 ], 00:27:58.173 "driver_specific": { 00:27:58.173 "raid": { 00:27:58.173 "uuid": "e0447387-ba47-4d33-8bcb-6067a6f07270", 00:27:58.173 "strip_size_kb": 0, 00:27:58.173 "state": "online", 00:27:58.173 "raid_level": "raid1", 00:27:58.173 "superblock": true, 00:27:58.173 "num_base_bdevs": 2, 00:27:58.173 "num_base_bdevs_discovered": 2, 00:27:58.173 "num_base_bdevs_operational": 2, 00:27:58.173 "base_bdevs_list": [ 00:27:58.173 { 00:27:58.173 "name": "BaseBdev1", 00:27:58.173 "uuid": "0c4e9d08-7aea-4b7a-a87e-2ccd0e6e105c", 00:27:58.173 "is_configured": true, 00:27:58.173 "data_offset": 256, 00:27:58.173 "data_size": 7936 00:27:58.173 }, 00:27:58.173 { 00:27:58.173 "name": "BaseBdev2", 00:27:58.173 "uuid": "46cfacd5-6960-4396-9545-d6988734d522", 00:27:58.173 "is_configured": true, 00:27:58.173 "data_offset": 256, 00:27:58.173 "data_size": 7936 00:27:58.173 } 00:27:58.173 ] 00:27:58.173 } 00:27:58.173 } 00:27:58.173 }' 00:27:58.173 13:27:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:58.173 13:27:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:27:58.173 BaseBdev2' 00:27:58.173 13:27:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:58.173 13:27:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:27:58.173 13:27:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:58.432 13:27:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:58.432 "name": "BaseBdev1", 
00:27:58.432 "aliases": [ 00:27:58.432 "0c4e9d08-7aea-4b7a-a87e-2ccd0e6e105c" 00:27:58.432 ], 00:27:58.432 "product_name": "Malloc disk", 00:27:58.432 "block_size": 4128, 00:27:58.432 "num_blocks": 8192, 00:27:58.432 "uuid": "0c4e9d08-7aea-4b7a-a87e-2ccd0e6e105c", 00:27:58.432 "md_size": 32, 00:27:58.432 "md_interleave": true, 00:27:58.432 "dif_type": 0, 00:27:58.432 "assigned_rate_limits": { 00:27:58.432 "rw_ios_per_sec": 0, 00:27:58.432 "rw_mbytes_per_sec": 0, 00:27:58.432 "r_mbytes_per_sec": 0, 00:27:58.432 "w_mbytes_per_sec": 0 00:27:58.432 }, 00:27:58.432 "claimed": true, 00:27:58.432 "claim_type": "exclusive_write", 00:27:58.432 "zoned": false, 00:27:58.432 "supported_io_types": { 00:27:58.432 "read": true, 00:27:58.432 "write": true, 00:27:58.432 "unmap": true, 00:27:58.432 "flush": true, 00:27:58.432 "reset": true, 00:27:58.432 "nvme_admin": false, 00:27:58.432 "nvme_io": false, 00:27:58.432 "nvme_io_md": false, 00:27:58.432 "write_zeroes": true, 00:27:58.432 "zcopy": true, 00:27:58.432 "get_zone_info": false, 00:27:58.432 "zone_management": false, 00:27:58.432 "zone_append": false, 00:27:58.432 "compare": false, 00:27:58.432 "compare_and_write": false, 00:27:58.432 "abort": true, 00:27:58.432 "seek_hole": false, 00:27:58.432 "seek_data": false, 00:27:58.432 "copy": true, 00:27:58.432 "nvme_iov_md": false 00:27:58.432 }, 00:27:58.432 "memory_domains": [ 00:27:58.432 { 00:27:58.432 "dma_device_id": "system", 00:27:58.432 "dma_device_type": 1 00:27:58.432 }, 00:27:58.432 { 00:27:58.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:58.432 "dma_device_type": 2 00:27:58.432 } 00:27:58.432 ], 00:27:58.432 "driver_specific": {} 00:27:58.432 }' 00:27:58.432 13:27:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:58.690 13:27:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:58.690 13:27:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:27:58.690 13:27:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:58.690 13:27:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:58.690 13:27:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:58.690 13:27:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:58.690 13:27:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:58.690 13:27:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:27:58.690 13:27:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:58.949 13:27:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:58.949 13:27:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:58.949 13:27:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:27:58.949 13:27:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:27:58.949 13:27:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:27:59.208 13:27:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:27:59.208 "name": "BaseBdev2", 00:27:59.208 "aliases": [ 00:27:59.208 "46cfacd5-6960-4396-9545-d6988734d522" 00:27:59.208 ], 00:27:59.208 "product_name": "Malloc disk", 00:27:59.208 "block_size": 4128, 00:27:59.208 "num_blocks": 8192, 00:27:59.208 "uuid": 
"46cfacd5-6960-4396-9545-d6988734d522", 00:27:59.208 "md_size": 32, 00:27:59.208 "md_interleave": true, 00:27:59.208 "dif_type": 0, 00:27:59.208 "assigned_rate_limits": { 00:27:59.208 "rw_ios_per_sec": 0, 00:27:59.208 "rw_mbytes_per_sec": 0, 00:27:59.208 "r_mbytes_per_sec": 0, 00:27:59.208 "w_mbytes_per_sec": 0 00:27:59.208 }, 00:27:59.208 "claimed": true, 00:27:59.208 "claim_type": "exclusive_write", 00:27:59.208 "zoned": false, 00:27:59.208 "supported_io_types": { 00:27:59.208 "read": true, 00:27:59.208 "write": true, 00:27:59.208 "unmap": true, 00:27:59.208 "flush": true, 00:27:59.208 "reset": true, 00:27:59.208 "nvme_admin": false, 00:27:59.208 "nvme_io": false, 00:27:59.208 "nvme_io_md": false, 00:27:59.208 "write_zeroes": true, 00:27:59.208 "zcopy": true, 00:27:59.208 "get_zone_info": false, 00:27:59.208 "zone_management": false, 00:27:59.208 "zone_append": false, 00:27:59.208 "compare": false, 00:27:59.208 "compare_and_write": false, 00:27:59.208 "abort": true, 00:27:59.208 "seek_hole": false, 00:27:59.208 "seek_data": false, 00:27:59.208 "copy": true, 00:27:59.208 "nvme_iov_md": false 00:27:59.208 }, 00:27:59.208 "memory_domains": [ 00:27:59.208 { 00:27:59.208 "dma_device_id": "system", 00:27:59.208 "dma_device_type": 1 00:27:59.208 }, 00:27:59.208 { 00:27:59.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:59.208 "dma_device_type": 2 00:27:59.208 } 00:27:59.208 ], 00:27:59.208 "driver_specific": {} 00:27:59.208 }' 00:27:59.208 13:27:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:59.208 13:27:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:27:59.208 13:27:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:27:59.208 13:27:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:59.208 13:27:39 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:27:59.208 13:27:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:27:59.208 13:27:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:59.467 13:27:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:27:59.467 13:27:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:27:59.467 13:27:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:59.468 13:27:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:27:59.468 13:27:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:27:59.468 13:27:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:27:59.727 [2024-07-26 13:27:40.088522] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:59.727 13:27:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:27:59.727 13:27:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:27:59.727 13:27:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:59.727 13:27:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:27:59.727 13:27:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:27:59.727 13:27:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 1 00:27:59.727 13:27:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:27:59.727 13:27:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:59.727 13:27:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:59.727 13:27:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:59.727 13:27:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:59.727 13:27:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:59.727 13:27:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:59.728 13:27:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:59.728 13:27:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:59.728 13:27:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:59.728 13:27:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:59.986 13:27:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:59.986 "name": "Existed_Raid", 00:27:59.986 "uuid": "e0447387-ba47-4d33-8bcb-6067a6f07270", 00:27:59.986 "strip_size_kb": 0, 00:27:59.986 "state": "online", 00:27:59.986 "raid_level": "raid1", 00:27:59.986 "superblock": true, 00:27:59.986 "num_base_bdevs": 2, 00:27:59.986 
"num_base_bdevs_discovered": 1, 00:27:59.986 "num_base_bdevs_operational": 1, 00:27:59.986 "base_bdevs_list": [ 00:27:59.986 { 00:27:59.986 "name": null, 00:27:59.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:59.986 "is_configured": false, 00:27:59.986 "data_offset": 256, 00:27:59.986 "data_size": 7936 00:27:59.986 }, 00:27:59.986 { 00:27:59.986 "name": "BaseBdev2", 00:27:59.987 "uuid": "46cfacd5-6960-4396-9545-d6988734d522", 00:27:59.987 "is_configured": true, 00:27:59.987 "data_offset": 256, 00:27:59.987 "data_size": 7936 00:27:59.987 } 00:27:59.987 ] 00:27:59.987 }' 00:27:59.987 13:27:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:59.987 13:27:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:00.610 13:27:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:28:00.610 13:27:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:00.610 13:27:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:28:00.610 13:27:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:00.869 13:27:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:28:00.869 13:27:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:00.869 13:27:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:28:00.869 [2024-07-26 13:27:41.348868] 
bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:00.869 [2024-07-26 13:27:41.348947] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:00.869 [2024-07-26 13:27:41.359741] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:00.869 [2024-07-26 13:27:41.359773] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:00.869 [2024-07-26 13:27:41.359783] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa526c0 name Existed_Raid, state offline 00:28:00.869 13:27:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:28:00.869 13:27:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:28:00.869 13:27:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:00.869 13:27:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:28:01.129 13:27:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:28:01.129 13:27:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:28:01.129 13:27:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:28:01.129 13:27:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 840428 00:28:01.129 13:27:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 840428 ']' 00:28:01.129 13:27:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 840428 00:28:01.129 13:27:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:28:01.129 13:27:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:01.129 13:27:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 840428 00:28:01.388 13:27:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:01.388 13:27:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:01.388 13:27:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 840428' 00:28:01.388 killing process with pid 840428 00:28:01.388 13:27:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 840428 00:28:01.388 [2024-07-26 13:27:41.671536] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:01.388 13:27:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 840428 00:28:01.388 [2024-07-26 13:27:41.672386] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:01.388 13:27:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:28:01.388 00:28:01.388 real 0m10.131s 00:28:01.388 user 0m17.876s 00:28:01.388 sys 0m1.980s 00:28:01.388 13:27:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:01.388 13:27:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:01.388 ************************************ 00:28:01.388 END TEST raid_state_function_test_sb_md_interleaved 00:28:01.388 ************************************ 00:28:01.388 13:27:41 bdev_raid -- bdev/bdev_raid.sh@993 -- # 
run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:28:01.388 13:27:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:01.388 13:27:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:01.388 13:27:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:01.648 ************************************ 00:28:01.648 START TEST raid_superblock_test_md_interleaved 00:28:01.648 ************************************ 00:28:01.648 13:27:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:28:01.648 13:27:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:28:01.648 13:27:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:28:01.648 13:27:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:28:01.648 13:27:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:28:01.648 13:27:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:28:01.648 13:27:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:28:01.648 13:27:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:28:01.648 13:27:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:28:01.648 13:27:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:28:01.648 13:27:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@414 -- # local strip_size 00:28:01.648 13:27:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:28:01.648 13:27:41 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:28:01.648 13:27:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:28:01.648 13:27:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:28:01.648 13:27:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:28:01.648 13:27:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@427 -- # raid_pid=842279 00:28:01.648 13:27:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@428 -- # waitforlisten 842279 /var/tmp/spdk-raid.sock 00:28:01.648 13:27:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:28:01.648 13:27:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 842279 ']' 00:28:01.648 13:27:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:01.648 13:27:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:01.648 13:27:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:01.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:01.648 13:27:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:01.648 13:27:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:01.648 [2024-07-26 13:27:42.003384] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:28:01.648 [2024-07-26 13:27:42.003439] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid842279 ] 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3d:02.3 cannot be used 
00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:01.648 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:01.648 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:01.648 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:01.648 [2024-07-26 13:27:42.135556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.908 [2024-07-26 13:27:42.223919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.908 [2024-07-26 13:27:42.277597] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:01.908 [2024-07-26 13:27:42.277622] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:02.477 13:27:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:02.477 13:27:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:28:02.477 13:27:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:28:02.477 13:27:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:28:02.477 13:27:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:28:02.477 13:27:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:28:02.477 
13:27:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:28:02.477 13:27:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:02.477 13:27:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:28:02.477 13:27:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:02.477 13:27:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:28:02.736 malloc1 00:28:02.737 13:27:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:02.995 [2024-07-26 13:27:43.349249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:02.995 [2024-07-26 13:27:43.349291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:02.995 [2024-07-26 13:27:43.349311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2236310 00:28:02.996 [2024-07-26 13:27:43.349323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:02.996 [2024-07-26 13:27:43.350713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:02.996 [2024-07-26 13:27:43.350739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:02.996 pt1 00:28:02.996 13:27:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:28:02.996 13:27:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( 
i <= num_base_bdevs )) 00:28:02.996 13:27:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:28:02.996 13:27:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:28:02.996 13:27:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:28:02.996 13:27:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:02.996 13:27:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:28:02.996 13:27:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:02.996 13:27:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:28:03.255 malloc2 00:28:03.255 13:27:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:03.514 [2024-07-26 13:27:43.811322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:03.514 [2024-07-26 13:27:43.811365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:03.514 [2024-07-26 13:27:43.811381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x222d950 00:28:03.514 [2024-07-26 13:27:43.811393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:03.514 [2024-07-26 13:27:43.812594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:03.514 [2024-07-26 13:27:43.812619] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:03.514 pt2 00:28:03.514 13:27:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:28:03.514 13:27:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:28:03.514 13:27:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:28:03.514 [2024-07-26 13:27:44.023888] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:03.514 [2024-07-26 13:27:44.025005] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:03.514 [2024-07-26 13:27:44.025134] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2236ae0 00:28:03.514 [2024-07-26 13:27:44.025157] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:03.514 [2024-07-26 13:27:44.025228] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2231c00 00:28:03.514 [2024-07-26 13:27:44.025308] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2236ae0 00:28:03.515 [2024-07-26 13:27:44.025317] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2236ae0 00:28:03.515 [2024-07-26 13:27:44.025381] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:03.515 13:27:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:03.515 13:27:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:03.774 13:27:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:03.774 13:27:44 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:03.774 13:27:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:03.774 13:27:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:03.774 13:27:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:03.774 13:27:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:03.774 13:27:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:03.774 13:27:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:03.774 13:27:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.774 13:27:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:03.774 13:27:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:03.774 "name": "raid_bdev1", 00:28:03.774 "uuid": "c082c430-17e1-46aa-995e-de391dcce05e", 00:28:03.774 "strip_size_kb": 0, 00:28:03.774 "state": "online", 00:28:03.774 "raid_level": "raid1", 00:28:03.774 "superblock": true, 00:28:03.774 "num_base_bdevs": 2, 00:28:03.774 "num_base_bdevs_discovered": 2, 00:28:03.774 "num_base_bdevs_operational": 2, 00:28:03.774 "base_bdevs_list": [ 00:28:03.774 { 00:28:03.774 "name": "pt1", 00:28:03.774 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:03.774 "is_configured": true, 00:28:03.774 "data_offset": 256, 00:28:03.774 "data_size": 7936 00:28:03.774 }, 00:28:03.774 { 00:28:03.774 "name": "pt2", 00:28:03.774 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:28:03.774 "is_configured": true, 00:28:03.774 "data_offset": 256, 00:28:03.774 "data_size": 7936 00:28:03.774 } 00:28:03.774 ] 00:28:03.774 }' 00:28:03.774 13:27:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:03.774 13:27:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:04.398 13:27:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:28:04.398 13:27:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:28:04.398 13:27:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:04.398 13:27:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:04.398 13:27:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:04.398 13:27:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:28:04.398 13:27:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:04.398 13:27:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:04.658 [2024-07-26 13:27:45.050812] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:04.658 13:27:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:04.658 "name": "raid_bdev1", 00:28:04.658 "aliases": [ 00:28:04.658 "c082c430-17e1-46aa-995e-de391dcce05e" 00:28:04.658 ], 00:28:04.658 "product_name": "Raid Volume", 00:28:04.658 "block_size": 4128, 00:28:04.658 "num_blocks": 7936, 00:28:04.658 "uuid": 
"c082c430-17e1-46aa-995e-de391dcce05e", 00:28:04.658 "md_size": 32, 00:28:04.658 "md_interleave": true, 00:28:04.658 "dif_type": 0, 00:28:04.658 "assigned_rate_limits": { 00:28:04.658 "rw_ios_per_sec": 0, 00:28:04.658 "rw_mbytes_per_sec": 0, 00:28:04.658 "r_mbytes_per_sec": 0, 00:28:04.658 "w_mbytes_per_sec": 0 00:28:04.658 }, 00:28:04.658 "claimed": false, 00:28:04.658 "zoned": false, 00:28:04.658 "supported_io_types": { 00:28:04.658 "read": true, 00:28:04.658 "write": true, 00:28:04.658 "unmap": false, 00:28:04.658 "flush": false, 00:28:04.658 "reset": true, 00:28:04.658 "nvme_admin": false, 00:28:04.658 "nvme_io": false, 00:28:04.658 "nvme_io_md": false, 00:28:04.658 "write_zeroes": true, 00:28:04.658 "zcopy": false, 00:28:04.658 "get_zone_info": false, 00:28:04.658 "zone_management": false, 00:28:04.658 "zone_append": false, 00:28:04.658 "compare": false, 00:28:04.658 "compare_and_write": false, 00:28:04.658 "abort": false, 00:28:04.658 "seek_hole": false, 00:28:04.658 "seek_data": false, 00:28:04.658 "copy": false, 00:28:04.658 "nvme_iov_md": false 00:28:04.658 }, 00:28:04.658 "memory_domains": [ 00:28:04.658 { 00:28:04.658 "dma_device_id": "system", 00:28:04.658 "dma_device_type": 1 00:28:04.658 }, 00:28:04.658 { 00:28:04.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:04.658 "dma_device_type": 2 00:28:04.658 }, 00:28:04.658 { 00:28:04.658 "dma_device_id": "system", 00:28:04.658 "dma_device_type": 1 00:28:04.658 }, 00:28:04.658 { 00:28:04.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:04.658 "dma_device_type": 2 00:28:04.658 } 00:28:04.658 ], 00:28:04.658 "driver_specific": { 00:28:04.658 "raid": { 00:28:04.658 "uuid": "c082c430-17e1-46aa-995e-de391dcce05e", 00:28:04.658 "strip_size_kb": 0, 00:28:04.658 "state": "online", 00:28:04.658 "raid_level": "raid1", 00:28:04.658 "superblock": true, 00:28:04.658 "num_base_bdevs": 2, 00:28:04.658 "num_base_bdevs_discovered": 2, 00:28:04.658 "num_base_bdevs_operational": 2, 00:28:04.658 "base_bdevs_list": [ 
00:28:04.658 { 00:28:04.658 "name": "pt1", 00:28:04.658 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:04.658 "is_configured": true, 00:28:04.658 "data_offset": 256, 00:28:04.658 "data_size": 7936 00:28:04.658 }, 00:28:04.658 { 00:28:04.658 "name": "pt2", 00:28:04.658 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:04.658 "is_configured": true, 00:28:04.658 "data_offset": 256, 00:28:04.658 "data_size": 7936 00:28:04.658 } 00:28:04.658 ] 00:28:04.658 } 00:28:04.658 } 00:28:04.658 }' 00:28:04.658 13:27:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:04.658 13:27:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:28:04.658 pt2' 00:28:04.658 13:27:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:04.658 13:27:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:28:04.658 13:27:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:04.918 13:27:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:04.918 "name": "pt1", 00:28:04.918 "aliases": [ 00:28:04.918 "00000000-0000-0000-0000-000000000001" 00:28:04.918 ], 00:28:04.918 "product_name": "passthru", 00:28:04.918 "block_size": 4128, 00:28:04.918 "num_blocks": 8192, 00:28:04.918 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:04.918 "md_size": 32, 00:28:04.918 "md_interleave": true, 00:28:04.918 "dif_type": 0, 00:28:04.918 "assigned_rate_limits": { 00:28:04.918 "rw_ios_per_sec": 0, 00:28:04.918 "rw_mbytes_per_sec": 0, 00:28:04.918 "r_mbytes_per_sec": 0, 00:28:04.918 "w_mbytes_per_sec": 0 00:28:04.918 }, 00:28:04.918 "claimed": true, 
00:28:04.918 "claim_type": "exclusive_write", 00:28:04.918 "zoned": false, 00:28:04.918 "supported_io_types": { 00:28:04.918 "read": true, 00:28:04.918 "write": true, 00:28:04.918 "unmap": true, 00:28:04.918 "flush": true, 00:28:04.918 "reset": true, 00:28:04.918 "nvme_admin": false, 00:28:04.918 "nvme_io": false, 00:28:04.918 "nvme_io_md": false, 00:28:04.918 "write_zeroes": true, 00:28:04.918 "zcopy": true, 00:28:04.918 "get_zone_info": false, 00:28:04.918 "zone_management": false, 00:28:04.918 "zone_append": false, 00:28:04.918 "compare": false, 00:28:04.918 "compare_and_write": false, 00:28:04.918 "abort": true, 00:28:04.918 "seek_hole": false, 00:28:04.918 "seek_data": false, 00:28:04.918 "copy": true, 00:28:04.918 "nvme_iov_md": false 00:28:04.918 }, 00:28:04.918 "memory_domains": [ 00:28:04.918 { 00:28:04.918 "dma_device_id": "system", 00:28:04.918 "dma_device_type": 1 00:28:04.918 }, 00:28:04.918 { 00:28:04.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:04.918 "dma_device_type": 2 00:28:04.918 } 00:28:04.918 ], 00:28:04.918 "driver_specific": { 00:28:04.918 "passthru": { 00:28:04.918 "name": "pt1", 00:28:04.918 "base_bdev_name": "malloc1" 00:28:04.918 } 00:28:04.918 } 00:28:04.918 }' 00:28:04.918 13:27:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:04.918 13:27:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:04.918 13:27:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:28:04.918 13:27:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:05.177 13:27:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:05.177 13:27:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:05.177 13:27:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:28:05.177 13:27:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:05.177 13:27:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:28:05.177 13:27:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:05.177 13:27:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:05.177 13:27:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:05.177 13:27:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:05.177 13:27:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:05.177 13:27:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:28:05.437 13:27:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:05.437 "name": "pt2", 00:28:05.437 "aliases": [ 00:28:05.437 "00000000-0000-0000-0000-000000000002" 00:28:05.437 ], 00:28:05.437 "product_name": "passthru", 00:28:05.437 "block_size": 4128, 00:28:05.437 "num_blocks": 8192, 00:28:05.437 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:05.437 "md_size": 32, 00:28:05.437 "md_interleave": true, 00:28:05.437 "dif_type": 0, 00:28:05.437 "assigned_rate_limits": { 00:28:05.437 "rw_ios_per_sec": 0, 00:28:05.437 "rw_mbytes_per_sec": 0, 00:28:05.437 "r_mbytes_per_sec": 0, 00:28:05.437 "w_mbytes_per_sec": 0 00:28:05.437 }, 00:28:05.437 "claimed": true, 00:28:05.437 "claim_type": "exclusive_write", 00:28:05.437 "zoned": false, 00:28:05.437 "supported_io_types": { 00:28:05.437 "read": true, 00:28:05.437 "write": true, 00:28:05.437 "unmap": true, 00:28:05.437 "flush": true, 00:28:05.437 "reset": 
true, 00:28:05.437 "nvme_admin": false, 00:28:05.437 "nvme_io": false, 00:28:05.437 "nvme_io_md": false, 00:28:05.437 "write_zeroes": true, 00:28:05.437 "zcopy": true, 00:28:05.437 "get_zone_info": false, 00:28:05.437 "zone_management": false, 00:28:05.437 "zone_append": false, 00:28:05.437 "compare": false, 00:28:05.437 "compare_and_write": false, 00:28:05.437 "abort": true, 00:28:05.437 "seek_hole": false, 00:28:05.437 "seek_data": false, 00:28:05.437 "copy": true, 00:28:05.437 "nvme_iov_md": false 00:28:05.437 }, 00:28:05.437 "memory_domains": [ 00:28:05.437 { 00:28:05.437 "dma_device_id": "system", 00:28:05.437 "dma_device_type": 1 00:28:05.437 }, 00:28:05.437 { 00:28:05.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:05.437 "dma_device_type": 2 00:28:05.437 } 00:28:05.437 ], 00:28:05.437 "driver_specific": { 00:28:05.437 "passthru": { 00:28:05.437 "name": "pt2", 00:28:05.437 "base_bdev_name": "malloc2" 00:28:05.437 } 00:28:05.437 } 00:28:05.437 }' 00:28:05.437 13:27:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:05.696 13:27:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:05.696 13:27:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:28:05.696 13:27:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:05.696 13:27:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:05.696 13:27:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:05.696 13:27:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:05.696 13:27:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:05.696 13:27:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 
00:28:05.696 13:27:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:05.955 13:27:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:05.955 13:27:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:05.955 13:27:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:28:05.955 13:27:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:06.214 [2024-07-26 13:27:46.482587] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:06.214 13:27:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=c082c430-17e1-46aa-995e-de391dcce05e 00:28:06.214 13:27:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' -z c082c430-17e1-46aa-995e-de391dcce05e ']' 00:28:06.214 13:27:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:06.214 [2024-07-26 13:27:46.710955] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:06.214 [2024-07-26 13:27:46.710971] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:06.214 [2024-07-26 13:27:46.711020] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:06.214 [2024-07-26 13:27:46.711068] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:06.214 [2024-07-26 13:27:46.711078] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2236ae0 name raid_bdev1, state offline 00:28:06.214 13:27:46 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:06.214 13:27:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:28:06.474 13:27:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:28:06.474 13:27:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:28:06.474 13:27:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:28:06.474 13:27:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:28:06.733 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:28:06.733 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:06.992 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:28:06.992 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:28:07.252 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:28:07.252 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:28:07.252 13:27:47 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:28:07.252 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:28:07.252 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:07.252 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:07.252 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:07.252 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:07.252 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:07.252 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:07.252 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:07.252 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:28:07.253 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:28:07.512 [2024-07-26 13:27:47.857930] bdev_raid.c:3312:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev malloc1 is claimed 00:28:07.512 [2024-07-26 13:27:47.859203] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:28:07.512 [2024-07-26 13:27:47.859254] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:28:07.512 [2024-07-26 13:27:47.859289] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:28:07.512 [2024-07-26 13:27:47.859306] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:07.512 [2024-07-26 13:27:47.859315] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2230600 name raid_bdev1, state configuring 00:28:07.512 request: 00:28:07.512 { 00:28:07.512 "name": "raid_bdev1", 00:28:07.512 "raid_level": "raid1", 00:28:07.512 "base_bdevs": [ 00:28:07.512 "malloc1", 00:28:07.512 "malloc2" 00:28:07.512 ], 00:28:07.512 "superblock": false, 00:28:07.512 "method": "bdev_raid_create", 00:28:07.512 "req_id": 1 00:28:07.512 } 00:28:07.512 Got JSON-RPC error response 00:28:07.512 response: 00:28:07.512 { 00:28:07.512 "code": -17, 00:28:07.512 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:28:07.512 } 00:28:07.512 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:28:07.512 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:07.512 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:07.512 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:07.512 13:27:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:07.512 13:27:47 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:28:07.772 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:28:07.772 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:28:07.772 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:08.031 [2024-07-26 13:27:48.315084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:08.031 [2024-07-26 13:27:48.315124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:08.031 [2024-07-26 13:27:48.315144] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2231df0 00:28:08.031 [2024-07-26 13:27:48.315156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:08.031 [2024-07-26 13:27:48.316460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:08.031 [2024-07-26 13:27:48.316484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:08.031 [2024-07-26 13:27:48.316529] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:08.031 [2024-07-26 13:27:48.316554] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:08.031 pt1 00:28:08.031 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:28:08.032 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:08.032 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:28:08.032 13:27:48 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:08.032 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:08.032 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:08.032 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:08.032 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:08.032 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:08.032 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:08.032 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:08.032 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:08.291 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:08.291 "name": "raid_bdev1", 00:28:08.291 "uuid": "c082c430-17e1-46aa-995e-de391dcce05e", 00:28:08.291 "strip_size_kb": 0, 00:28:08.291 "state": "configuring", 00:28:08.291 "raid_level": "raid1", 00:28:08.291 "superblock": true, 00:28:08.291 "num_base_bdevs": 2, 00:28:08.291 "num_base_bdevs_discovered": 1, 00:28:08.291 "num_base_bdevs_operational": 2, 00:28:08.291 "base_bdevs_list": [ 00:28:08.291 { 00:28:08.291 "name": "pt1", 00:28:08.291 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:08.291 "is_configured": true, 00:28:08.291 "data_offset": 256, 00:28:08.291 "data_size": 7936 00:28:08.291 }, 00:28:08.291 { 00:28:08.291 "name": null, 00:28:08.291 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:28:08.291 "is_configured": false, 00:28:08.291 "data_offset": 256, 00:28:08.291 "data_size": 7936 00:28:08.291 } 00:28:08.291 ] 00:28:08.291 }' 00:28:08.291 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:08.291 13:27:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:08.860 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:28:08.860 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:28:08.860 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:28:08.860 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:08.860 [2024-07-26 13:27:49.341844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:08.860 [2024-07-26 13:27:49.341889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:08.860 [2024-07-26 13:27:49.341905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x222fe40 00:28:08.860 [2024-07-26 13:27:49.341917] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:08.860 [2024-07-26 13:27:49.342062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:08.860 [2024-07-26 13:27:49.342076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:08.860 [2024-07-26 13:27:49.342116] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:08.860 [2024-07-26 13:27:49.342132] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:28:08.860 [2024-07-26 13:27:49.342224] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x222f840 00:28:08.860 [2024-07-26 13:27:49.342234] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:08.860 [2024-07-26 13:27:49.342280] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20994f0 00:28:08.860 [2024-07-26 13:27:49.342348] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x222f840 00:28:08.860 [2024-07-26 13:27:49.342356] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x222f840 00:28:08.860 [2024-07-26 13:27:49.342407] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:08.860 pt2 00:28:08.860 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:28:08.860 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:28:08.860 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:08.860 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:08.860 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:08.860 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:08.860 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:08.860 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:08.860 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:08.860 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:28:08.860 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:08.860 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:08.860 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:08.860 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:09.120 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:09.120 "name": "raid_bdev1", 00:28:09.120 "uuid": "c082c430-17e1-46aa-995e-de391dcce05e", 00:28:09.120 "strip_size_kb": 0, 00:28:09.120 "state": "online", 00:28:09.120 "raid_level": "raid1", 00:28:09.120 "superblock": true, 00:28:09.120 "num_base_bdevs": 2, 00:28:09.120 "num_base_bdevs_discovered": 2, 00:28:09.120 "num_base_bdevs_operational": 2, 00:28:09.120 "base_bdevs_list": [ 00:28:09.120 { 00:28:09.120 "name": "pt1", 00:28:09.120 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:09.120 "is_configured": true, 00:28:09.120 "data_offset": 256, 00:28:09.120 "data_size": 7936 00:28:09.120 }, 00:28:09.120 { 00:28:09.120 "name": "pt2", 00:28:09.120 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:09.120 "is_configured": true, 00:28:09.120 "data_offset": 256, 00:28:09.120 "data_size": 7936 00:28:09.120 } 00:28:09.120 ] 00:28:09.120 }' 00:28:09.120 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:09.120 13:27:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:09.688 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:28:09.688 13:27:50 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:28:09.688 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:28:09.688 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:28:09.688 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:28:09.688 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:28:09.688 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:09.688 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:28:09.947 [2024-07-26 13:27:50.364773] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:09.947 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:28:09.947 "name": "raid_bdev1", 00:28:09.947 "aliases": [ 00:28:09.947 "c082c430-17e1-46aa-995e-de391dcce05e" 00:28:09.947 ], 00:28:09.947 "product_name": "Raid Volume", 00:28:09.947 "block_size": 4128, 00:28:09.947 "num_blocks": 7936, 00:28:09.947 "uuid": "c082c430-17e1-46aa-995e-de391dcce05e", 00:28:09.947 "md_size": 32, 00:28:09.947 "md_interleave": true, 00:28:09.947 "dif_type": 0, 00:28:09.947 "assigned_rate_limits": { 00:28:09.947 "rw_ios_per_sec": 0, 00:28:09.947 "rw_mbytes_per_sec": 0, 00:28:09.947 "r_mbytes_per_sec": 0, 00:28:09.947 "w_mbytes_per_sec": 0 00:28:09.947 }, 00:28:09.947 "claimed": false, 00:28:09.947 "zoned": false, 00:28:09.947 "supported_io_types": { 00:28:09.947 "read": true, 00:28:09.947 "write": true, 00:28:09.947 "unmap": false, 00:28:09.947 "flush": false, 00:28:09.947 "reset": true, 00:28:09.947 "nvme_admin": false, 
00:28:09.947 "nvme_io": false, 00:28:09.947 "nvme_io_md": false, 00:28:09.947 "write_zeroes": true, 00:28:09.947 "zcopy": false, 00:28:09.947 "get_zone_info": false, 00:28:09.947 "zone_management": false, 00:28:09.947 "zone_append": false, 00:28:09.947 "compare": false, 00:28:09.947 "compare_and_write": false, 00:28:09.947 "abort": false, 00:28:09.947 "seek_hole": false, 00:28:09.947 "seek_data": false, 00:28:09.947 "copy": false, 00:28:09.947 "nvme_iov_md": false 00:28:09.947 }, 00:28:09.947 "memory_domains": [ 00:28:09.947 { 00:28:09.947 "dma_device_id": "system", 00:28:09.947 "dma_device_type": 1 00:28:09.947 }, 00:28:09.947 { 00:28:09.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.947 "dma_device_type": 2 00:28:09.947 }, 00:28:09.947 { 00:28:09.947 "dma_device_id": "system", 00:28:09.947 "dma_device_type": 1 00:28:09.947 }, 00:28:09.947 { 00:28:09.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.947 "dma_device_type": 2 00:28:09.947 } 00:28:09.947 ], 00:28:09.947 "driver_specific": { 00:28:09.947 "raid": { 00:28:09.947 "uuid": "c082c430-17e1-46aa-995e-de391dcce05e", 00:28:09.947 "strip_size_kb": 0, 00:28:09.947 "state": "online", 00:28:09.947 "raid_level": "raid1", 00:28:09.947 "superblock": true, 00:28:09.947 "num_base_bdevs": 2, 00:28:09.947 "num_base_bdevs_discovered": 2, 00:28:09.947 "num_base_bdevs_operational": 2, 00:28:09.947 "base_bdevs_list": [ 00:28:09.947 { 00:28:09.947 "name": "pt1", 00:28:09.947 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:09.947 "is_configured": true, 00:28:09.947 "data_offset": 256, 00:28:09.947 "data_size": 7936 00:28:09.947 }, 00:28:09.947 { 00:28:09.947 "name": "pt2", 00:28:09.947 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:09.947 "is_configured": true, 00:28:09.947 "data_offset": 256, 00:28:09.947 "data_size": 7936 00:28:09.947 } 00:28:09.947 ] 00:28:09.947 } 00:28:09.947 } 00:28:09.947 }' 00:28:09.947 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:09.947 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:28:09.947 pt2' 00:28:09.947 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:09.947 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:28:09.947 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:10.207 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:10.207 "name": "pt1", 00:28:10.207 "aliases": [ 00:28:10.207 "00000000-0000-0000-0000-000000000001" 00:28:10.207 ], 00:28:10.207 "product_name": "passthru", 00:28:10.207 "block_size": 4128, 00:28:10.207 "num_blocks": 8192, 00:28:10.207 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:10.207 "md_size": 32, 00:28:10.207 "md_interleave": true, 00:28:10.207 "dif_type": 0, 00:28:10.207 "assigned_rate_limits": { 00:28:10.207 "rw_ios_per_sec": 0, 00:28:10.207 "rw_mbytes_per_sec": 0, 00:28:10.207 "r_mbytes_per_sec": 0, 00:28:10.207 "w_mbytes_per_sec": 0 00:28:10.207 }, 00:28:10.207 "claimed": true, 00:28:10.207 "claim_type": "exclusive_write", 00:28:10.207 "zoned": false, 00:28:10.207 "supported_io_types": { 00:28:10.207 "read": true, 00:28:10.207 "write": true, 00:28:10.207 "unmap": true, 00:28:10.207 "flush": true, 00:28:10.207 "reset": true, 00:28:10.207 "nvme_admin": false, 00:28:10.207 "nvme_io": false, 00:28:10.207 "nvme_io_md": false, 00:28:10.207 "write_zeroes": true, 00:28:10.207 "zcopy": true, 00:28:10.207 "get_zone_info": false, 00:28:10.207 "zone_management": false, 00:28:10.207 "zone_append": false, 00:28:10.207 "compare": false, 00:28:10.207 "compare_and_write": false, 00:28:10.207 
"abort": true, 00:28:10.207 "seek_hole": false, 00:28:10.207 "seek_data": false, 00:28:10.207 "copy": true, 00:28:10.207 "nvme_iov_md": false 00:28:10.207 }, 00:28:10.207 "memory_domains": [ 00:28:10.207 { 00:28:10.207 "dma_device_id": "system", 00:28:10.207 "dma_device_type": 1 00:28:10.207 }, 00:28:10.207 { 00:28:10.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:10.207 "dma_device_type": 2 00:28:10.207 } 00:28:10.207 ], 00:28:10.207 "driver_specific": { 00:28:10.207 "passthru": { 00:28:10.207 "name": "pt1", 00:28:10.207 "base_bdev_name": "malloc1" 00:28:10.207 } 00:28:10.207 } 00:28:10.207 }' 00:28:10.207 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:10.207 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:10.466 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:28:10.466 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:10.466 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:10.466 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:10.466 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:10.466 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:10.466 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:28:10.466 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:10.466 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:10.725 13:27:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:10.725 13:27:51 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:28:10.725 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:28:10.725 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:28:10.725 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:28:10.725 "name": "pt2", 00:28:10.725 "aliases": [ 00:28:10.725 "00000000-0000-0000-0000-000000000002" 00:28:10.725 ], 00:28:10.725 "product_name": "passthru", 00:28:10.725 "block_size": 4128, 00:28:10.725 "num_blocks": 8192, 00:28:10.725 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:10.725 "md_size": 32, 00:28:10.725 "md_interleave": true, 00:28:10.725 "dif_type": 0, 00:28:10.725 "assigned_rate_limits": { 00:28:10.725 "rw_ios_per_sec": 0, 00:28:10.725 "rw_mbytes_per_sec": 0, 00:28:10.725 "r_mbytes_per_sec": 0, 00:28:10.725 "w_mbytes_per_sec": 0 00:28:10.725 }, 00:28:10.725 "claimed": true, 00:28:10.725 "claim_type": "exclusive_write", 00:28:10.725 "zoned": false, 00:28:10.725 "supported_io_types": { 00:28:10.725 "read": true, 00:28:10.725 "write": true, 00:28:10.725 "unmap": true, 00:28:10.725 "flush": true, 00:28:10.725 "reset": true, 00:28:10.725 "nvme_admin": false, 00:28:10.725 "nvme_io": false, 00:28:10.725 "nvme_io_md": false, 00:28:10.725 "write_zeroes": true, 00:28:10.725 "zcopy": true, 00:28:10.725 "get_zone_info": false, 00:28:10.725 "zone_management": false, 00:28:10.725 "zone_append": false, 00:28:10.725 "compare": false, 00:28:10.725 "compare_and_write": false, 00:28:10.725 "abort": true, 00:28:10.725 "seek_hole": false, 00:28:10.725 "seek_data": false, 00:28:10.725 "copy": true, 00:28:10.725 "nvme_iov_md": false 00:28:10.725 }, 00:28:10.725 "memory_domains": [ 00:28:10.725 { 00:28:10.725 "dma_device_id": 
"system", 00:28:10.725 "dma_device_type": 1 00:28:10.725 }, 00:28:10.725 { 00:28:10.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:10.725 "dma_device_type": 2 00:28:10.725 } 00:28:10.725 ], 00:28:10.725 "driver_specific": { 00:28:10.725 "passthru": { 00:28:10.725 "name": "pt2", 00:28:10.725 "base_bdev_name": "malloc2" 00:28:10.725 } 00:28:10.725 } 00:28:10.725 }' 00:28:10.725 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:10.984 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:28:10.984 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:28:10.984 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:10.984 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:28:10.984 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:28:10.984 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:10.984 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:28:10.984 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:28:10.984 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:11.243 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:28:11.243 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:28:11.243 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:11.243 13:27:51 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:28:11.243 [2024-07-26 13:27:51.768495] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:11.502 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # '[' c082c430-17e1-46aa-995e-de391dcce05e '!=' c082c430-17e1-46aa-995e-de391dcce05e ']' 00:28:11.503 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:28:11.503 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:28:11.503 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:28:11.503 13:27:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:28:11.503 [2024-07-26 13:27:51.996887] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:28:11.503 13:27:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:11.503 13:27:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:11.503 13:27:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:11.503 13:27:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:11.503 13:27:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:11.503 13:27:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:11.503 13:27:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:11.503 13:27:52 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:11.503 13:27:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:11.503 13:27:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:11.762 13:27:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:11.762 13:27:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:11.762 13:27:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:11.762 "name": "raid_bdev1", 00:28:11.762 "uuid": "c082c430-17e1-46aa-995e-de391dcce05e", 00:28:11.762 "strip_size_kb": 0, 00:28:11.762 "state": "online", 00:28:11.762 "raid_level": "raid1", 00:28:11.762 "superblock": true, 00:28:11.762 "num_base_bdevs": 2, 00:28:11.762 "num_base_bdevs_discovered": 1, 00:28:11.762 "num_base_bdevs_operational": 1, 00:28:11.762 "base_bdevs_list": [ 00:28:11.762 { 00:28:11.762 "name": null, 00:28:11.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:11.762 "is_configured": false, 00:28:11.762 "data_offset": 256, 00:28:11.762 "data_size": 7936 00:28:11.762 }, 00:28:11.762 { 00:28:11.762 "name": "pt2", 00:28:11.762 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:11.762 "is_configured": true, 00:28:11.762 "data_offset": 256, 00:28:11.762 "data_size": 7936 00:28:11.762 } 00:28:11.762 ] 00:28:11.762 }' 00:28:11.762 13:27:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:11.762 13:27:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:12.331 13:27:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@514 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:12.590 [2024-07-26 13:27:53.055652] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:12.590 [2024-07-26 13:27:53.055675] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:12.590 [2024-07-26 13:27:53.055721] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:12.590 [2024-07-26 13:27:53.055758] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:12.590 [2024-07-26 13:27:53.055769] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x222f840 name raid_bdev1, state offline 00:28:12.590 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:12.590 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:28:12.849 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:28:12.849 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:28:12.849 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:28:12.849 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:28:12.849 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:28:13.108 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:28:13.108 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i < 
num_base_bdevs )) 00:28:13.108 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:28:13.108 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:28:13.108 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@534 -- # i=1 00:28:13.108 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:13.367 [2024-07-26 13:27:53.737408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:13.367 [2024-07-26 13:27:53.737443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:13.367 [2024-07-26 13:27:53.737466] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2099100 00:28:13.367 [2024-07-26 13:27:53.737478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:13.367 [2024-07-26 13:27:53.738791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:13.367 [2024-07-26 13:27:53.738815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:13.367 [2024-07-26 13:27:53.738856] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:13.367 [2024-07-26 13:27:53.738878] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:13.367 [2024-07-26 13:27:53.738940] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x20997a0 00:28:13.367 [2024-07-26 13:27:53.738949] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:13.367 [2024-07-26 13:27:53.739001] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x209a850 00:28:13.367 [2024-07-26 13:27:53.739067] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20997a0 00:28:13.367 [2024-07-26 13:27:53.739076] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x20997a0 00:28:13.367 [2024-07-26 13:27:53.739125] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:13.367 pt2 00:28:13.367 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:13.367 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:13.367 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:13.367 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:13.367 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:13.367 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:13.367 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:13.367 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:13.367 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:13.367 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:13.367 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:13.367 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:13.626 13:27:53 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:13.626 "name": "raid_bdev1", 00:28:13.626 "uuid": "c082c430-17e1-46aa-995e-de391dcce05e", 00:28:13.626 "strip_size_kb": 0, 00:28:13.626 "state": "online", 00:28:13.626 "raid_level": "raid1", 00:28:13.626 "superblock": true, 00:28:13.626 "num_base_bdevs": 2, 00:28:13.626 "num_base_bdevs_discovered": 1, 00:28:13.626 "num_base_bdevs_operational": 1, 00:28:13.626 "base_bdevs_list": [ 00:28:13.626 { 00:28:13.626 "name": null, 00:28:13.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:13.626 "is_configured": false, 00:28:13.626 "data_offset": 256, 00:28:13.626 "data_size": 7936 00:28:13.626 }, 00:28:13.626 { 00:28:13.626 "name": "pt2", 00:28:13.626 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:13.626 "is_configured": true, 00:28:13.626 "data_offset": 256, 00:28:13.626 "data_size": 7936 00:28:13.626 } 00:28:13.626 ] 00:28:13.626 }' 00:28:13.626 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:13.626 13:27:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:14.193 13:27:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:14.452 [2024-07-26 13:27:54.732018] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:14.452 [2024-07-26 13:27:54.732044] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:14.452 [2024-07-26 13:27:54.732089] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:14.452 [2024-07-26 13:27:54.732125] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:14.452 [2024-07-26 13:27:54.732135] bdev_raid.c: 378:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x20997a0 name raid_bdev1, state offline 00:28:14.452 13:27:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:14.452 13:27:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:28:14.452 13:27:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:28:14.452 13:27:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:28:14.452 13:27:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:28:14.712 13:27:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:14.712 [2024-07-26 13:27:55.185203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:14.712 [2024-07-26 13:27:55.185243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:14.712 [2024-07-26 13:27:55.185258] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2230600 00:28:14.712 [2024-07-26 13:27:55.185270] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:14.712 [2024-07-26 13:27:55.186567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:14.712 [2024-07-26 13:27:55.186591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:14.712 [2024-07-26 13:27:55.186630] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:14.712 [2024-07-26 13:27:55.186653] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:14.712 [2024-07-26 13:27:55.186725] 
bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:28:14.712 [2024-07-26 13:27:55.186736] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:14.712 [2024-07-26 13:27:55.186750] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x222e080 name raid_bdev1, state configuring 00:28:14.712 [2024-07-26 13:27:55.186769] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:14.712 [2024-07-26 13:27:55.186820] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x2099600 00:28:14.712 [2024-07-26 13:27:55.186830] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:14.712 [2024-07-26 13:27:55.186880] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x222fcb0 00:28:14.712 [2024-07-26 13:27:55.186944] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2099600 00:28:14.712 [2024-07-26 13:27:55.186953] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2099600 00:28:14.712 [2024-07-26 13:27:55.187006] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:14.712 pt1 00:28:14.712 13:27:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:28:14.712 13:27:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:14.712 13:27:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:14.712 13:27:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:14.712 13:27:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:14.712 13:27:55 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:14.712 13:27:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:14.712 13:27:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:14.712 13:27:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:14.712 13:27:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:14.712 13:27:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:14.712 13:27:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:14.712 13:27:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:14.971 13:27:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:14.971 "name": "raid_bdev1", 00:28:14.971 "uuid": "c082c430-17e1-46aa-995e-de391dcce05e", 00:28:14.971 "strip_size_kb": 0, 00:28:14.971 "state": "online", 00:28:14.971 "raid_level": "raid1", 00:28:14.971 "superblock": true, 00:28:14.971 "num_base_bdevs": 2, 00:28:14.971 "num_base_bdevs_discovered": 1, 00:28:14.971 "num_base_bdevs_operational": 1, 00:28:14.971 "base_bdevs_list": [ 00:28:14.971 { 00:28:14.971 "name": null, 00:28:14.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:14.971 "is_configured": false, 00:28:14.971 "data_offset": 256, 00:28:14.971 "data_size": 7936 00:28:14.971 }, 00:28:14.971 { 00:28:14.971 "name": "pt2", 00:28:14.971 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:14.971 "is_configured": true, 00:28:14.971 "data_offset": 256, 00:28:14.971 "data_size": 7936 00:28:14.971 
} 00:28:14.971 ] 00:28:14.971 }' 00:28:14.971 13:27:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:14.971 13:27:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:15.540 13:27:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:28:15.540 13:27:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:28:15.800 13:27:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:28:15.800 13:27:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:15.800 13:27:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:28:16.059 [2024-07-26 13:27:56.440725] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:16.059 13:27:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # '[' c082c430-17e1-46aa-995e-de391dcce05e '!=' c082c430-17e1-46aa-995e-de391dcce05e ']' 00:28:16.059 13:27:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@578 -- # killprocess 842279 00:28:16.059 13:27:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 842279 ']' 00:28:16.059 13:27:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 842279 00:28:16.059 13:27:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:28:16.059 13:27:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:16.059 
13:27:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 842279 00:28:16.059 13:27:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:16.059 13:27:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:16.059 13:27:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 842279' 00:28:16.059 killing process with pid 842279 00:28:16.059 13:27:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 842279 00:28:16.059 [2024-07-26 13:27:56.514580] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:16.059 [2024-07-26 13:27:56.514632] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:16.059 [2024-07-26 13:27:56.514671] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:16.059 [2024-07-26 13:27:56.514681] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2099600 name raid_bdev1, state offline 00:28:16.059 13:27:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 842279 00:28:16.059 [2024-07-26 13:27:56.530697] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:16.319 13:27:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@580 -- # return 0 00:28:16.319 00:28:16.319 real 0m14.777s 00:28:16.319 user 0m26.711s 00:28:16.319 sys 0m2.784s 00:28:16.319 13:27:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:16.319 13:27:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:16.319 ************************************ 00:28:16.319 END TEST raid_superblock_test_md_interleaved 00:28:16.319 
************************************ 00:28:16.319 13:27:56 bdev_raid -- bdev/bdev_raid.sh@994 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:28:16.319 13:27:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:28:16.319 13:27:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:16.320 13:27:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:16.320 ************************************ 00:28:16.320 START TEST raid_rebuild_test_sb_md_interleaved 00:28:16.320 ************************************ 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # local verify=false 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # local strip_size 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # local create_arg 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@594 -- # local data_offset 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # raid_pid=844975 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # waitforlisten 844975 /var/tmp/spdk-raid.sock 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 844975 ']' 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:16.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:16.320 13:27:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:16.580 [2024-07-26 13:27:56.879604] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:28:16.580 [2024-07-26 13:27:56.879664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid844975 ] 00:28:16.580 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:16.580 Zero copy mechanism will not be used. 
00:28:16.580 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:16.580 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:16.580 [2024-07-26 13:27:57.010695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.580 [2024-07-26 13:27:57.098545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.839 [2024-07-26 13:27:57.159236] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:16.839 [2024-07-26 13:27:57.159266] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:17.407 13:27:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:17.407 13:27:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:28:17.407 13:27:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:28:17.407 13:27:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:28:17.667 BaseBdev1_malloc 00:28:17.667 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:17.926 [2024-07-26 13:27:58.231286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:28:17.926 [2024-07-26 13:27:58.231328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:17.926 [2024-07-26 13:27:58.231349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17f1610 00:28:17.926 [2024-07-26 13:27:58.231360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:17.926 [2024-07-26 13:27:58.232770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:17.926 [2024-07-26 13:27:58.232796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:17.926 BaseBdev1 00:28:17.926 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:28:17.926 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:28:18.200 BaseBdev2_malloc 00:28:18.200 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:18.200 [2024-07-26 13:27:58.689295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:18.200 [2024-07-26 13:27:58.689341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:18.200 [2024-07-26 13:27:58.689358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17e8cc0 00:28:18.200 [2024-07-26 13:27:58.689370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:18.200 [2024-07-26 13:27:58.690603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:18.200 [2024-07-26 13:27:58.690627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev2 00:28:18.200 BaseBdev2 00:28:18.200 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:28:18.526 spare_malloc 00:28:18.526 13:27:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:18.786 spare_delay 00:28:18.786 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:19.045 [2024-07-26 13:27:59.343619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:19.045 [2024-07-26 13:27:59.343658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:19.045 [2024-07-26 13:27:59.343675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17e98e0 00:28:19.045 [2024-07-26 13:27:59.343686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:19.045 [2024-07-26 13:27:59.344913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:19.045 [2024-07-26 13:27:59.344937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:19.045 spare 00:28:19.045 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:28:19.305 [2024-07-26 13:27:59.572255] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:19.305 [2024-07-26 13:27:59.573449] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:19.305 [2024-07-26 13:27:59.573584] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x17ec2b0 00:28:19.305 [2024-07-26 13:27:59.573595] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:19.305 [2024-07-26 13:27:59.573668] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17eeec0 00:28:19.305 [2024-07-26 13:27:59.573739] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x17ec2b0 00:28:19.305 [2024-07-26 13:27:59.573748] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x17ec2b0 00:28:19.305 [2024-07-26 13:27:59.573810] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:19.305 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:19.305 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:19.305 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:19.305 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:19.305 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:19.305 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:19.305 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:19.305 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:19.305 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:19.305 
13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:19.305 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:19.305 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:19.305 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:19.305 "name": "raid_bdev1", 00:28:19.305 "uuid": "d3bfd42f-2c66-4222-90b4-4ea51747f4e4", 00:28:19.305 "strip_size_kb": 0, 00:28:19.305 "state": "online", 00:28:19.305 "raid_level": "raid1", 00:28:19.305 "superblock": true, 00:28:19.305 "num_base_bdevs": 2, 00:28:19.305 "num_base_bdevs_discovered": 2, 00:28:19.305 "num_base_bdevs_operational": 2, 00:28:19.305 "base_bdevs_list": [ 00:28:19.305 { 00:28:19.305 "name": "BaseBdev1", 00:28:19.305 "uuid": "ba143aa7-24c7-542a-a3ca-edac0ef152ed", 00:28:19.305 "is_configured": true, 00:28:19.305 "data_offset": 256, 00:28:19.305 "data_size": 7936 00:28:19.305 }, 00:28:19.305 { 00:28:19.305 "name": "BaseBdev2", 00:28:19.305 "uuid": "40963802-50b5-57ca-893b-4ae20f71d674", 00:28:19.305 "is_configured": true, 00:28:19.305 "data_offset": 256, 00:28:19.305 "data_size": 7936 00:28:19.305 } 00:28:19.305 ] 00:28:19.305 }' 00:28:19.305 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:19.305 13:27:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:20.244 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:20.244 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 
00:28:20.244 [2024-07-26 13:28:00.611189] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:20.244 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:28:20.244 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:20.244 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:20.503 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:28:20.503 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:28:20.503 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # '[' false = true ']' 00:28:20.503 13:28:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:20.763 [2024-07-26 13:28:01.068163] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:20.763 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:20.763 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:20.763 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:20.763 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:20.763 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:20.763 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:20.763 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:20.763 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:20.763 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:20.763 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:20.763 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:20.763 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:21.023 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:21.023 "name": "raid_bdev1", 00:28:21.023 "uuid": "d3bfd42f-2c66-4222-90b4-4ea51747f4e4", 00:28:21.023 "strip_size_kb": 0, 00:28:21.023 "state": "online", 00:28:21.023 "raid_level": "raid1", 00:28:21.023 "superblock": true, 00:28:21.023 "num_base_bdevs": 2, 00:28:21.023 "num_base_bdevs_discovered": 1, 00:28:21.023 "num_base_bdevs_operational": 1, 00:28:21.023 "base_bdevs_list": [ 00:28:21.023 { 00:28:21.023 "name": null, 00:28:21.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:21.023 "is_configured": false, 00:28:21.023 "data_offset": 256, 00:28:21.023 "data_size": 7936 00:28:21.023 }, 00:28:21.023 { 00:28:21.023 "name": "BaseBdev2", 00:28:21.023 "uuid": "40963802-50b5-57ca-893b-4ae20f71d674", 00:28:21.023 "is_configured": true, 00:28:21.023 "data_offset": 256, 00:28:21.023 "data_size": 7936 00:28:21.023 } 00:28:21.023 ] 00:28:21.023 }' 00:28:21.023 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:21.023 
13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:21.591 13:28:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:21.591 [2024-07-26 13:28:02.086857] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:21.591 [2024-07-26 13:28:02.090308] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17eedd0 00:28:21.591 [2024-07-26 13:28:02.092416] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:21.591 13:28:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:22.967 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:22.967 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:22.967 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:22.967 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:22.967 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:22.967 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:22.967 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:22.967 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:22.967 "name": "raid_bdev1", 00:28:22.967 "uuid": 
"d3bfd42f-2c66-4222-90b4-4ea51747f4e4", 00:28:22.967 "strip_size_kb": 0, 00:28:22.967 "state": "online", 00:28:22.967 "raid_level": "raid1", 00:28:22.967 "superblock": true, 00:28:22.967 "num_base_bdevs": 2, 00:28:22.967 "num_base_bdevs_discovered": 2, 00:28:22.967 "num_base_bdevs_operational": 2, 00:28:22.967 "process": { 00:28:22.967 "type": "rebuild", 00:28:22.967 "target": "spare", 00:28:22.967 "progress": { 00:28:22.967 "blocks": 2816, 00:28:22.967 "percent": 35 00:28:22.967 } 00:28:22.967 }, 00:28:22.967 "base_bdevs_list": [ 00:28:22.967 { 00:28:22.967 "name": "spare", 00:28:22.967 "uuid": "e9d1338a-0b78-5887-8a49-8ecbf63f6185", 00:28:22.967 "is_configured": true, 00:28:22.967 "data_offset": 256, 00:28:22.967 "data_size": 7936 00:28:22.967 }, 00:28:22.967 { 00:28:22.967 "name": "BaseBdev2", 00:28:22.967 "uuid": "40963802-50b5-57ca-893b-4ae20f71d674", 00:28:22.967 "is_configured": true, 00:28:22.967 "data_offset": 256, 00:28:22.967 "data_size": 7936 00:28:22.967 } 00:28:22.967 ] 00:28:22.967 }' 00:28:22.967 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:22.967 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:22.967 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:22.967 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:22.967 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:23.228 [2024-07-26 13:28:03.577247] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:23.228 [2024-07-26 13:28:03.603444] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: 
No such device 00:28:23.228 [2024-07-26 13:28:03.603484] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:23.228 [2024-07-26 13:28:03.603498] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:23.228 [2024-07-26 13:28:03.603506] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:23.228 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:23.228 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:23.228 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:23.228 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:23.228 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:23.228 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:23.228 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:23.228 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:23.228 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:23.228 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:23.228 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:23.228 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:28:23.487 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:23.487 "name": "raid_bdev1", 00:28:23.487 "uuid": "d3bfd42f-2c66-4222-90b4-4ea51747f4e4", 00:28:23.487 "strip_size_kb": 0, 00:28:23.487 "state": "online", 00:28:23.487 "raid_level": "raid1", 00:28:23.487 "superblock": true, 00:28:23.487 "num_base_bdevs": 2, 00:28:23.487 "num_base_bdevs_discovered": 1, 00:28:23.487 "num_base_bdevs_operational": 1, 00:28:23.487 "base_bdevs_list": [ 00:28:23.487 { 00:28:23.487 "name": null, 00:28:23.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:23.487 "is_configured": false, 00:28:23.487 "data_offset": 256, 00:28:23.487 "data_size": 7936 00:28:23.487 }, 00:28:23.487 { 00:28:23.487 "name": "BaseBdev2", 00:28:23.487 "uuid": "40963802-50b5-57ca-893b-4ae20f71d674", 00:28:23.487 "is_configured": true, 00:28:23.487 "data_offset": 256, 00:28:23.487 "data_size": 7936 00:28:23.487 } 00:28:23.487 ] 00:28:23.487 }' 00:28:23.487 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:23.487 13:28:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:24.054 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:24.054 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:24.054 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:24.054 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:24.054 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:24.054 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:24.054 
13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:24.313 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:24.313 "name": "raid_bdev1", 00:28:24.313 "uuid": "d3bfd42f-2c66-4222-90b4-4ea51747f4e4", 00:28:24.313 "strip_size_kb": 0, 00:28:24.313 "state": "online", 00:28:24.313 "raid_level": "raid1", 00:28:24.313 "superblock": true, 00:28:24.313 "num_base_bdevs": 2, 00:28:24.313 "num_base_bdevs_discovered": 1, 00:28:24.313 "num_base_bdevs_operational": 1, 00:28:24.313 "base_bdevs_list": [ 00:28:24.313 { 00:28:24.313 "name": null, 00:28:24.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:24.313 "is_configured": false, 00:28:24.313 "data_offset": 256, 00:28:24.313 "data_size": 7936 00:28:24.313 }, 00:28:24.313 { 00:28:24.313 "name": "BaseBdev2", 00:28:24.313 "uuid": "40963802-50b5-57ca-893b-4ae20f71d674", 00:28:24.313 "is_configured": true, 00:28:24.313 "data_offset": 256, 00:28:24.313 "data_size": 7936 00:28:24.313 } 00:28:24.313 ] 00:28:24.313 }' 00:28:24.313 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:24.313 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:24.313 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:24.313 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:24.313 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:24.572 [2024-07-26 13:28:04.886465] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:24.572 [2024-07-26 13:28:04.889899] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17eeb30 00:28:24.572 [2024-07-26 13:28:04.891260] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:24.572 13:28:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@678 -- # sleep 1 00:28:25.509 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:25.509 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:25.509 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:25.509 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:25.509 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:25.509 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:25.509 13:28:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:25.768 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:25.768 "name": "raid_bdev1", 00:28:25.768 "uuid": "d3bfd42f-2c66-4222-90b4-4ea51747f4e4", 00:28:25.768 "strip_size_kb": 0, 00:28:25.768 "state": "online", 00:28:25.768 "raid_level": "raid1", 00:28:25.768 "superblock": true, 00:28:25.768 "num_base_bdevs": 2, 00:28:25.768 "num_base_bdevs_discovered": 2, 00:28:25.768 "num_base_bdevs_operational": 2, 00:28:25.768 "process": { 00:28:25.768 "type": "rebuild", 00:28:25.768 "target": "spare", 
00:28:25.768 "progress": { 00:28:25.768 "blocks": 3072, 00:28:25.768 "percent": 38 00:28:25.768 } 00:28:25.768 }, 00:28:25.768 "base_bdevs_list": [ 00:28:25.768 { 00:28:25.768 "name": "spare", 00:28:25.768 "uuid": "e9d1338a-0b78-5887-8a49-8ecbf63f6185", 00:28:25.768 "is_configured": true, 00:28:25.768 "data_offset": 256, 00:28:25.768 "data_size": 7936 00:28:25.768 }, 00:28:25.768 { 00:28:25.768 "name": "BaseBdev2", 00:28:25.768 "uuid": "40963802-50b5-57ca-893b-4ae20f71d674", 00:28:25.768 "is_configured": true, 00:28:25.768 "data_offset": 256, 00:28:25.768 "data_size": 7936 00:28:25.768 } 00:28:25.768 ] 00:28:25.768 }' 00:28:25.768 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:25.768 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:25.768 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:25.768 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:25.768 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:28:25.768 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:28:25.768 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:28:25.768 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:28:25.768 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:28:25.768 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:28:25.768 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # local timeout=1072 00:28:25.768 
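The `[: =: unary operator expected` error recorded above (bdev_raid.sh line 681) is the classic unquoted-empty-variable pitfall: the variable on the left of `'[' ... = false ']'` expanded to nothing, so `[` received only `= false` and treated `=` as a unary operator. A hedged reproduction and the usual quoting fix (the variable name here is hypothetical, not the one from the script):

```shell
fast_io=""   # hypothetical stand-in for the empty/unset test variable

# Unquoted, `[ $fast_io = false ]` would expand to `[ = false ]` and fail
# exactly as in the log:  [: =: unary operator expected

# Quoting the expansion (and optionally defaulting it) keeps `=` binary:
if [ "${fast_io:-false}" = false ]; then
    echo "took the false branch"
fi
```

Note the test still proceeds after the error because `[` merely returns a nonzero status, which the script's control flow tolerates here.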
13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:25.768 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:25.768 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:25.768 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:25.768 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:25.768 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:25.768 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:25.768 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:26.027 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:26.027 "name": "raid_bdev1", 00:28:26.027 "uuid": "d3bfd42f-2c66-4222-90b4-4ea51747f4e4", 00:28:26.027 "strip_size_kb": 0, 00:28:26.027 "state": "online", 00:28:26.027 "raid_level": "raid1", 00:28:26.027 "superblock": true, 00:28:26.027 "num_base_bdevs": 2, 00:28:26.027 "num_base_bdevs_discovered": 2, 00:28:26.027 "num_base_bdevs_operational": 2, 00:28:26.027 "process": { 00:28:26.027 "type": "rebuild", 00:28:26.027 "target": "spare", 00:28:26.027 "progress": { 00:28:26.027 "blocks": 3840, 00:28:26.027 "percent": 48 00:28:26.027 } 00:28:26.027 }, 00:28:26.027 "base_bdevs_list": [ 00:28:26.027 { 00:28:26.027 "name": "spare", 00:28:26.027 "uuid": "e9d1338a-0b78-5887-8a49-8ecbf63f6185", 00:28:26.027 "is_configured": true, 00:28:26.027 "data_offset": 256, 00:28:26.027 
"data_size": 7936 00:28:26.027 }, 00:28:26.027 { 00:28:26.027 "name": "BaseBdev2", 00:28:26.027 "uuid": "40963802-50b5-57ca-893b-4ae20f71d674", 00:28:26.027 "is_configured": true, 00:28:26.027 "data_offset": 256, 00:28:26.027 "data_size": 7936 00:28:26.027 } 00:28:26.027 ] 00:28:26.027 }' 00:28:26.027 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:26.027 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:26.027 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:26.027 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:26.027 13:28:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@726 -- # sleep 1 00:28:27.403 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:27.403 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:27.403 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:27.403 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:27.403 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:27.403 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:27.403 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:27.403 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:28:27.403 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:27.403 "name": "raid_bdev1", 00:28:27.403 "uuid": "d3bfd42f-2c66-4222-90b4-4ea51747f4e4", 00:28:27.403 "strip_size_kb": 0, 00:28:27.403 "state": "online", 00:28:27.403 "raid_level": "raid1", 00:28:27.403 "superblock": true, 00:28:27.403 "num_base_bdevs": 2, 00:28:27.403 "num_base_bdevs_discovered": 2, 00:28:27.403 "num_base_bdevs_operational": 2, 00:28:27.403 "process": { 00:28:27.403 "type": "rebuild", 00:28:27.403 "target": "spare", 00:28:27.403 "progress": { 00:28:27.403 "blocks": 7168, 00:28:27.403 "percent": 90 00:28:27.404 } 00:28:27.404 }, 00:28:27.404 "base_bdevs_list": [ 00:28:27.404 { 00:28:27.404 "name": "spare", 00:28:27.404 "uuid": "e9d1338a-0b78-5887-8a49-8ecbf63f6185", 00:28:27.404 "is_configured": true, 00:28:27.404 "data_offset": 256, 00:28:27.404 "data_size": 7936 00:28:27.404 }, 00:28:27.404 { 00:28:27.404 "name": "BaseBdev2", 00:28:27.404 "uuid": "40963802-50b5-57ca-893b-4ae20f71d674", 00:28:27.404 "is_configured": true, 00:28:27.404 "data_offset": 256, 00:28:27.404 "data_size": 7936 00:28:27.404 } 00:28:27.404 ] 00:28:27.404 }' 00:28:27.404 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:27.404 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:27.404 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:27.404 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:27.404 13:28:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@726 -- # sleep 1 00:28:27.662 [2024-07-26 13:28:08.013705] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:27.662 [2024-07-26 
13:28:08.013756] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:27.662 [2024-07-26 13:28:08.013832] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:28.597 13:28:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:28.597 13:28:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:28.597 13:28:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:28.597 13:28:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:28.597 13:28:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:28.597 13:28:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:28.597 13:28:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:28.597 13:28:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:28.597 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:28.597 "name": "raid_bdev1", 00:28:28.597 "uuid": "d3bfd42f-2c66-4222-90b4-4ea51747f4e4", 00:28:28.597 "strip_size_kb": 0, 00:28:28.597 "state": "online", 00:28:28.597 "raid_level": "raid1", 00:28:28.597 "superblock": true, 00:28:28.597 "num_base_bdevs": 2, 00:28:28.597 "num_base_bdevs_discovered": 2, 00:28:28.597 "num_base_bdevs_operational": 2, 00:28:28.597 "base_bdevs_list": [ 00:28:28.597 { 00:28:28.597 "name": "spare", 00:28:28.597 "uuid": "e9d1338a-0b78-5887-8a49-8ecbf63f6185", 00:28:28.597 "is_configured": true, 
00:28:28.597 "data_offset": 256, 00:28:28.597 "data_size": 7936 00:28:28.597 }, 00:28:28.597 { 00:28:28.597 "name": "BaseBdev2", 00:28:28.597 "uuid": "40963802-50b5-57ca-893b-4ae20f71d674", 00:28:28.597 "is_configured": true, 00:28:28.597 "data_offset": 256, 00:28:28.597 "data_size": 7936 00:28:28.597 } 00:28:28.597 ] 00:28:28.597 }' 00:28:28.597 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:28.856 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:28.856 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:28.856 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:28.856 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@724 -- # break 00:28:28.856 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:28.856 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:28.856 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:28.856 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:28.856 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:28.856 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:28.856 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:29.115 13:28:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:29.115 "name": "raid_bdev1", 00:28:29.115 "uuid": "d3bfd42f-2c66-4222-90b4-4ea51747f4e4", 00:28:29.115 "strip_size_kb": 0, 00:28:29.115 "state": "online", 00:28:29.115 "raid_level": "raid1", 00:28:29.115 "superblock": true, 00:28:29.115 "num_base_bdevs": 2, 00:28:29.115 "num_base_bdevs_discovered": 2, 00:28:29.115 "num_base_bdevs_operational": 2, 00:28:29.115 "base_bdevs_list": [ 00:28:29.115 { 00:28:29.115 "name": "spare", 00:28:29.115 "uuid": "e9d1338a-0b78-5887-8a49-8ecbf63f6185", 00:28:29.115 "is_configured": true, 00:28:29.115 "data_offset": 256, 00:28:29.115 "data_size": 7936 00:28:29.115 }, 00:28:29.115 { 00:28:29.115 "name": "BaseBdev2", 00:28:29.115 "uuid": "40963802-50b5-57ca-893b-4ae20f71d674", 00:28:29.115 "is_configured": true, 00:28:29.115 "data_offset": 256, 00:28:29.115 "data_size": 7936 00:28:29.115 } 00:28:29.115 ] 00:28:29.115 }' 00:28:29.115 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:29.115 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:29.115 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:29.115 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:29.115 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:29.115 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:29.115 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:29.115 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:28:29.115 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:29.115 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:29.115 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:29.115 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:29.115 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:29.115 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:29.115 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:29.115 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:29.374 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:29.374 "name": "raid_bdev1", 00:28:29.374 "uuid": "d3bfd42f-2c66-4222-90b4-4ea51747f4e4", 00:28:29.374 "strip_size_kb": 0, 00:28:29.374 "state": "online", 00:28:29.374 "raid_level": "raid1", 00:28:29.374 "superblock": true, 00:28:29.374 "num_base_bdevs": 2, 00:28:29.374 "num_base_bdevs_discovered": 2, 00:28:29.374 "num_base_bdevs_operational": 2, 00:28:29.374 "base_bdevs_list": [ 00:28:29.374 { 00:28:29.374 "name": "spare", 00:28:29.374 "uuid": "e9d1338a-0b78-5887-8a49-8ecbf63f6185", 00:28:29.374 "is_configured": true, 00:28:29.374 "data_offset": 256, 00:28:29.374 "data_size": 7936 00:28:29.374 }, 00:28:29.374 { 00:28:29.374 "name": "BaseBdev2", 00:28:29.374 "uuid": "40963802-50b5-57ca-893b-4ae20f71d674", 00:28:29.374 "is_configured": true, 00:28:29.374 "data_offset": 256, 00:28:29.374 
"data_size": 7936 00:28:29.374 } 00:28:29.374 ] 00:28:29.374 }' 00:28:29.374 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:29.374 13:28:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:29.942 13:28:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:30.201 [2024-07-26 13:28:10.521091] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:30.201 [2024-07-26 13:28:10.521116] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:30.201 [2024-07-26 13:28:10.521176] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:30.201 [2024-07-26 13:28:10.521229] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:30.201 [2024-07-26 13:28:10.521240] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17ec2b0 name raid_bdev1, state offline 00:28:30.201 13:28:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:30.201 13:28:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@735 -- # jq length 00:28:30.460 13:28:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:28:30.460 13:28:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@737 -- # '[' false = true ']' 00:28:30.460 13:28:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:28:30.460 13:28:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:30.719 13:28:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:30.719 [2024-07-26 13:28:11.198838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:30.719 [2024-07-26 13:28:11.198877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:30.719 [2024-07-26 13:28:11.198894] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17ee450 00:28:30.719 [2024-07-26 13:28:11.198905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:30.719 [2024-07-26 13:28:11.200498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:30.719 [2024-07-26 13:28:11.200525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:30.719 [2024-07-26 13:28:11.200577] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:30.719 [2024-07-26 13:28:11.200602] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:30.719 [2024-07-26 13:28:11.200682] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:30.719 spare 00:28:30.719 13:28:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:30.719 13:28:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:30.719 13:28:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:30.719 13:28:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
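The rebuild polling seen earlier (bdev_raid.sh@721-726) relies on bash's builtin `SECONDS` counter: a deadline is computed once, and the loop re-verifies progress until completion or timeout. A compressed sketch of that shape, with a hypothetical `rebuild_done` predicate standing in for the real RPC/jq verification:

```shell
# Deadline relative to bash's SECONDS counter; the traced run used timeout=1072.
timeout=$((SECONDS + 5))

# Hypothetical completion check; the real loop inspects `.process.type` via jq.
rebuild_done() { [ "$1" -ge 3 ]; }

iterations=0
while (( SECONDS < timeout )); do
    rebuild_done "$iterations" && break
    iterations=$((iterations + 1))   # the real loop does `sleep 1` between polls
done
echo "iterations=$iterations"
```

The log's `break` at bdev_raid.sh@724 corresponds to the predicate firing once `.process.type` flips back to `none`, after which the state is verified one final time.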
00:28:30.719 13:28:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:30.719 13:28:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:30.719 13:28:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:30.719 13:28:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:30.719 13:28:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:30.719 13:28:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:30.719 13:28:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:30.719 13:28:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:30.978 [2024-07-26 13:28:11.300983] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x17de870 00:28:30.978 [2024-07-26 13:28:11.300999] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:28:30.978 [2024-07-26 13:28:11.301072] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1650850 00:28:30.978 [2024-07-26 13:28:11.301164] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x17de870 00:28:30.978 [2024-07-26 13:28:11.301175] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x17de870 00:28:30.978 [2024-07-26 13:28:11.301239] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:30.978 13:28:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:30.978 "name": "raid_bdev1", 00:28:30.978 
"uuid": "d3bfd42f-2c66-4222-90b4-4ea51747f4e4", 00:28:30.978 "strip_size_kb": 0, 00:28:30.978 "state": "online", 00:28:30.978 "raid_level": "raid1", 00:28:30.978 "superblock": true, 00:28:30.978 "num_base_bdevs": 2, 00:28:30.978 "num_base_bdevs_discovered": 2, 00:28:30.978 "num_base_bdevs_operational": 2, 00:28:30.978 "base_bdevs_list": [ 00:28:30.978 { 00:28:30.978 "name": "spare", 00:28:30.978 "uuid": "e9d1338a-0b78-5887-8a49-8ecbf63f6185", 00:28:30.978 "is_configured": true, 00:28:30.978 "data_offset": 256, 00:28:30.978 "data_size": 7936 00:28:30.979 }, 00:28:30.979 { 00:28:30.979 "name": "BaseBdev2", 00:28:30.979 "uuid": "40963802-50b5-57ca-893b-4ae20f71d674", 00:28:30.979 "is_configured": true, 00:28:30.979 "data_offset": 256, 00:28:30.979 "data_size": 7936 00:28:30.979 } 00:28:30.979 ] 00:28:30.979 }' 00:28:30.979 13:28:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:30.979 13:28:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:31.546 13:28:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:31.546 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:31.546 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:31.546 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:31.546 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:31.546 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:31.546 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.805 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:31.805 "name": "raid_bdev1", 00:28:31.805 "uuid": "d3bfd42f-2c66-4222-90b4-4ea51747f4e4", 00:28:31.805 "strip_size_kb": 0, 00:28:31.805 "state": "online", 00:28:31.805 "raid_level": "raid1", 00:28:31.805 "superblock": true, 00:28:31.805 "num_base_bdevs": 2, 00:28:31.805 "num_base_bdevs_discovered": 2, 00:28:31.805 "num_base_bdevs_operational": 2, 00:28:31.805 "base_bdevs_list": [ 00:28:31.805 { 00:28:31.805 "name": "spare", 00:28:31.805 "uuid": "e9d1338a-0b78-5887-8a49-8ecbf63f6185", 00:28:31.805 "is_configured": true, 00:28:31.805 "data_offset": 256, 00:28:31.805 "data_size": 7936 00:28:31.805 }, 00:28:31.805 { 00:28:31.805 "name": "BaseBdev2", 00:28:31.805 "uuid": "40963802-50b5-57ca-893b-4ae20f71d674", 00:28:31.805 "is_configured": true, 00:28:31.805 "data_offset": 256, 00:28:31.805 "data_size": 7936 00:28:31.805 } 00:28:31.805 ] 00:28:31.805 }' 00:28:31.805 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:31.805 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:31.805 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:31.805 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:31.805 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.805 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:28:32.064 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # [[ spare == 
\s\p\a\r\e ]] 00:28:32.064 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:32.323 [2024-07-26 13:28:12.698880] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:32.323 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:32.323 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:32.323 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:32.323 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:32.323 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:32.323 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:32.323 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:32.323 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:32.323 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:32.323 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:32.323 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:32.323 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:32.582 13:28:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:32.582 "name": "raid_bdev1", 00:28:32.582 "uuid": "d3bfd42f-2c66-4222-90b4-4ea51747f4e4", 00:28:32.582 "strip_size_kb": 0, 00:28:32.582 "state": "online", 00:28:32.582 "raid_level": "raid1", 00:28:32.582 "superblock": true, 00:28:32.582 "num_base_bdevs": 2, 00:28:32.582 "num_base_bdevs_discovered": 1, 00:28:32.582 "num_base_bdevs_operational": 1, 00:28:32.582 "base_bdevs_list": [ 00:28:32.582 { 00:28:32.582 "name": null, 00:28:32.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:32.582 "is_configured": false, 00:28:32.582 "data_offset": 256, 00:28:32.582 "data_size": 7936 00:28:32.582 }, 00:28:32.582 { 00:28:32.582 "name": "BaseBdev2", 00:28:32.582 "uuid": "40963802-50b5-57ca-893b-4ae20f71d674", 00:28:32.582 "is_configured": true, 00:28:32.582 "data_offset": 256, 00:28:32.582 "data_size": 7936 00:28:32.582 } 00:28:32.582 ] 00:28:32.582 }' 00:28:32.582 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:32.582 13:28:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:33.149 13:28:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:33.408 [2024-07-26 13:28:13.741643] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:33.408 [2024-07-26 13:28:13.741777] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:33.408 [2024-07-26 13:28:13.741791] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:28:33.408 [2024-07-26 13:28:13.741817] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:33.408 [2024-07-26 13:28:13.745192] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17f07e0 00:28:33.408 [2024-07-26 13:28:13.747301] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:33.408 13:28:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # sleep 1 00:28:34.423 13:28:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:34.423 13:28:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:34.423 13:28:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:34.423 13:28:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:34.423 13:28:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:34.423 13:28:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:34.423 13:28:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:34.683 13:28:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:34.683 "name": "raid_bdev1", 00:28:34.683 "uuid": "d3bfd42f-2c66-4222-90b4-4ea51747f4e4", 00:28:34.683 "strip_size_kb": 0, 00:28:34.683 "state": "online", 00:28:34.683 "raid_level": "raid1", 00:28:34.683 "superblock": true, 00:28:34.683 "num_base_bdevs": 2, 00:28:34.683 "num_base_bdevs_discovered": 2, 00:28:34.683 "num_base_bdevs_operational": 2, 00:28:34.683 "process": { 00:28:34.683 "type": 
"rebuild", 00:28:34.683 "target": "spare", 00:28:34.683 "progress": { 00:28:34.683 "blocks": 3072, 00:28:34.683 "percent": 38 00:28:34.683 } 00:28:34.683 }, 00:28:34.683 "base_bdevs_list": [ 00:28:34.683 { 00:28:34.683 "name": "spare", 00:28:34.683 "uuid": "e9d1338a-0b78-5887-8a49-8ecbf63f6185", 00:28:34.683 "is_configured": true, 00:28:34.683 "data_offset": 256, 00:28:34.683 "data_size": 7936 00:28:34.683 }, 00:28:34.683 { 00:28:34.683 "name": "BaseBdev2", 00:28:34.683 "uuid": "40963802-50b5-57ca-893b-4ae20f71d674", 00:28:34.683 "is_configured": true, 00:28:34.683 "data_offset": 256, 00:28:34.683 "data_size": 7936 00:28:34.683 } 00:28:34.683 ] 00:28:34.683 }' 00:28:34.683 13:28:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:34.683 13:28:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:34.683 13:28:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:34.683 13:28:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:34.683 13:28:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:34.942 [2024-07-26 13:28:15.299929] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:34.942 [2024-07-26 13:28:15.359033] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:34.942 [2024-07-26 13:28:15.359075] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:34.942 [2024-07-26 13:28:15.359089] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:34.943 [2024-07-26 13:28:15.359097] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to 
remove target bdev: No such device 00:28:34.943 13:28:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:34.943 13:28:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:34.943 13:28:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:34.943 13:28:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:34.943 13:28:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:34.943 13:28:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:34.943 13:28:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:34.943 13:28:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:34.943 13:28:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:34.943 13:28:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:34.943 13:28:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:34.943 13:28:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:35.202 13:28:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:35.202 "name": "raid_bdev1", 00:28:35.202 "uuid": "d3bfd42f-2c66-4222-90b4-4ea51747f4e4", 00:28:35.202 "strip_size_kb": 0, 00:28:35.202 "state": "online", 00:28:35.202 "raid_level": "raid1", 00:28:35.202 "superblock": true, 00:28:35.202 
"num_base_bdevs": 2, 00:28:35.202 "num_base_bdevs_discovered": 1, 00:28:35.202 "num_base_bdevs_operational": 1, 00:28:35.202 "base_bdevs_list": [ 00:28:35.202 { 00:28:35.202 "name": null, 00:28:35.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:35.202 "is_configured": false, 00:28:35.202 "data_offset": 256, 00:28:35.202 "data_size": 7936 00:28:35.202 }, 00:28:35.202 { 00:28:35.202 "name": "BaseBdev2", 00:28:35.202 "uuid": "40963802-50b5-57ca-893b-4ae20f71d674", 00:28:35.202 "is_configured": true, 00:28:35.202 "data_offset": 256, 00:28:35.202 "data_size": 7936 00:28:35.202 } 00:28:35.202 ] 00:28:35.202 }' 00:28:35.202 13:28:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:35.202 13:28:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:35.771 13:28:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:36.031 [2024-07-26 13:28:16.345206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:36.031 [2024-07-26 13:28:16.345250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:36.031 [2024-07-26 13:28:16.345269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1650620 00:28:36.031 [2024-07-26 13:28:16.345280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:36.031 [2024-07-26 13:28:16.345451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:36.031 [2024-07-26 13:28:16.345466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:36.031 [2024-07-26 13:28:16.345517] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:36.031 [2024-07-26 13:28:16.345527] 
bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:36.031 [2024-07-26 13:28:16.345537] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:28:36.031 [2024-07-26 13:28:16.345557] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:36.031 [2024-07-26 13:28:16.348898] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17f07e0 00:28:36.031 [2024-07-26 13:28:16.350257] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:36.031 spare 00:28:36.031 13:28:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # sleep 1 00:28:36.968 13:28:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:36.968 13:28:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:36.968 13:28:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:36.968 13:28:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:36.968 13:28:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:36.968 13:28:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:36.968 13:28:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:37.228 13:28:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:37.228 "name": "raid_bdev1", 00:28:37.228 "uuid": "d3bfd42f-2c66-4222-90b4-4ea51747f4e4", 
00:28:37.228 "strip_size_kb": 0, 00:28:37.228 "state": "online", 00:28:37.228 "raid_level": "raid1", 00:28:37.228 "superblock": true, 00:28:37.228 "num_base_bdevs": 2, 00:28:37.228 "num_base_bdevs_discovered": 2, 00:28:37.228 "num_base_bdevs_operational": 2, 00:28:37.228 "process": { 00:28:37.228 "type": "rebuild", 00:28:37.228 "target": "spare", 00:28:37.228 "progress": { 00:28:37.228 "blocks": 2816, 00:28:37.228 "percent": 35 00:28:37.228 } 00:28:37.228 }, 00:28:37.228 "base_bdevs_list": [ 00:28:37.228 { 00:28:37.228 "name": "spare", 00:28:37.228 "uuid": "e9d1338a-0b78-5887-8a49-8ecbf63f6185", 00:28:37.228 "is_configured": true, 00:28:37.228 "data_offset": 256, 00:28:37.228 "data_size": 7936 00:28:37.228 }, 00:28:37.228 { 00:28:37.228 "name": "BaseBdev2", 00:28:37.228 "uuid": "40963802-50b5-57ca-893b-4ae20f71d674", 00:28:37.228 "is_configured": true, 00:28:37.228 "data_offset": 256, 00:28:37.228 "data_size": 7936 00:28:37.228 } 00:28:37.228 ] 00:28:37.228 }' 00:28:37.228 13:28:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:37.228 13:28:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:37.228 13:28:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:37.228 13:28:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:37.228 13:28:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:37.488 [2024-07-26 13:28:17.847150] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:37.488 [2024-07-26 13:28:17.861252] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:37.488 [2024-07-26 
13:28:17.861294] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:37.488 [2024-07-26 13:28:17.861309] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:37.488 [2024-07-26 13:28:17.861317] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:37.488 13:28:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:37.488 13:28:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:37.488 13:28:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:37.488 13:28:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:37.488 13:28:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:37.488 13:28:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:37.488 13:28:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:37.488 13:28:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:37.488 13:28:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:37.488 13:28:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:37.488 13:28:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:37.488 13:28:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:37.747 13:28:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:37.747 "name": "raid_bdev1", 00:28:37.747 "uuid": "d3bfd42f-2c66-4222-90b4-4ea51747f4e4", 00:28:37.747 "strip_size_kb": 0, 00:28:37.747 "state": "online", 00:28:37.747 "raid_level": "raid1", 00:28:37.747 "superblock": true, 00:28:37.747 "num_base_bdevs": 2, 00:28:37.747 "num_base_bdevs_discovered": 1, 00:28:37.747 "num_base_bdevs_operational": 1, 00:28:37.747 "base_bdevs_list": [ 00:28:37.747 { 00:28:37.747 "name": null, 00:28:37.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:37.747 "is_configured": false, 00:28:37.747 "data_offset": 256, 00:28:37.747 "data_size": 7936 00:28:37.747 }, 00:28:37.747 { 00:28:37.747 "name": "BaseBdev2", 00:28:37.747 "uuid": "40963802-50b5-57ca-893b-4ae20f71d674", 00:28:37.747 "is_configured": true, 00:28:37.747 "data_offset": 256, 00:28:37.747 "data_size": 7936 00:28:37.747 } 00:28:37.747 ] 00:28:37.747 }' 00:28:37.747 13:28:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:37.747 13:28:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:38.316 13:28:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:38.316 13:28:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:38.316 13:28:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:38.316 13:28:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:38.316 13:28:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:38.316 13:28:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.316 13:28:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:38.575 13:28:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:38.575 "name": "raid_bdev1", 00:28:38.575 "uuid": "d3bfd42f-2c66-4222-90b4-4ea51747f4e4", 00:28:38.575 "strip_size_kb": 0, 00:28:38.575 "state": "online", 00:28:38.575 "raid_level": "raid1", 00:28:38.575 "superblock": true, 00:28:38.575 "num_base_bdevs": 2, 00:28:38.575 "num_base_bdevs_discovered": 1, 00:28:38.575 "num_base_bdevs_operational": 1, 00:28:38.575 "base_bdevs_list": [ 00:28:38.575 { 00:28:38.575 "name": null, 00:28:38.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:38.575 "is_configured": false, 00:28:38.575 "data_offset": 256, 00:28:38.575 "data_size": 7936 00:28:38.575 }, 00:28:38.575 { 00:28:38.575 "name": "BaseBdev2", 00:28:38.575 "uuid": "40963802-50b5-57ca-893b-4ae20f71d674", 00:28:38.575 "is_configured": true, 00:28:38.575 "data_offset": 256, 00:28:38.575 "data_size": 7936 00:28:38.575 } 00:28:38.575 ] 00:28:38.575 }' 00:28:38.575 13:28:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:38.575 13:28:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:38.575 13:28:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:38.576 13:28:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:38.576 13:28:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:28:38.835 13:28:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@788 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:39.094 [2024-07-26 13:28:19.428894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:39.094 [2024-07-26 13:28:19.428936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:39.094 [2024-07-26 13:28:19.428953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17deb00 00:28:39.094 [2024-07-26 13:28:19.428964] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:39.094 [2024-07-26 13:28:19.429109] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:39.094 [2024-07-26 13:28:19.429124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:39.094 [2024-07-26 13:28:19.429173] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:39.094 [2024-07-26 13:28:19.429184] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:39.094 [2024-07-26 13:28:19.429194] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:39.094 BaseBdev1 00:28:39.094 13:28:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@789 -- # sleep 1 00:28:40.032 13:28:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:40.032 13:28:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:40.032 13:28:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:40.032 13:28:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:28:40.032 13:28:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:40.032 13:28:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:40.032 13:28:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:40.032 13:28:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:40.032 13:28:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:40.032 13:28:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:40.032 13:28:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:40.032 13:28:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:40.291 13:28:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:40.291 "name": "raid_bdev1", 00:28:40.291 "uuid": "d3bfd42f-2c66-4222-90b4-4ea51747f4e4", 00:28:40.291 "strip_size_kb": 0, 00:28:40.291 "state": "online", 00:28:40.291 "raid_level": "raid1", 00:28:40.291 "superblock": true, 00:28:40.291 "num_base_bdevs": 2, 00:28:40.291 "num_base_bdevs_discovered": 1, 00:28:40.291 "num_base_bdevs_operational": 1, 00:28:40.291 "base_bdevs_list": [ 00:28:40.291 { 00:28:40.291 "name": null, 00:28:40.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:40.291 "is_configured": false, 00:28:40.291 "data_offset": 256, 00:28:40.291 "data_size": 7936 00:28:40.291 }, 00:28:40.291 { 00:28:40.291 "name": "BaseBdev2", 00:28:40.291 "uuid": "40963802-50b5-57ca-893b-4ae20f71d674", 00:28:40.291 "is_configured": true, 00:28:40.291 "data_offset": 256, 00:28:40.291 
"data_size": 7936 00:28:40.291 } 00:28:40.291 ] 00:28:40.291 }' 00:28:40.291 13:28:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:40.291 13:28:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:40.859 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:40.859 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:40.859 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:40.859 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:40.859 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:40.859 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:40.859 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:41.120 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:41.120 "name": "raid_bdev1", 00:28:41.120 "uuid": "d3bfd42f-2c66-4222-90b4-4ea51747f4e4", 00:28:41.120 "strip_size_kb": 0, 00:28:41.120 "state": "online", 00:28:41.120 "raid_level": "raid1", 00:28:41.120 "superblock": true, 00:28:41.120 "num_base_bdevs": 2, 00:28:41.120 "num_base_bdevs_discovered": 1, 00:28:41.120 "num_base_bdevs_operational": 1, 00:28:41.120 "base_bdevs_list": [ 00:28:41.120 { 00:28:41.120 "name": null, 00:28:41.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:41.120 "is_configured": false, 00:28:41.120 "data_offset": 256, 00:28:41.120 "data_size": 7936 00:28:41.120 }, 
00:28:41.120 { 00:28:41.120 "name": "BaseBdev2", 00:28:41.120 "uuid": "40963802-50b5-57ca-893b-4ae20f71d674", 00:28:41.120 "is_configured": true, 00:28:41.120 "data_offset": 256, 00:28:41.120 "data_size": 7936 00:28:41.120 } 00:28:41.120 ] 00:28:41.120 }' 00:28:41.120 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:41.120 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:41.120 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:41.120 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:41.120 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:41.120 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:28:41.120 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:41.120 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:41.120 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:41.120 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:41.120 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:28:41.120 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:41.120 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:41.120 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:41.120 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:28:41.120 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:41.380 [2024-07-26 13:28:21.783177] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:41.380 [2024-07-26 13:28:21.783286] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:41.380 [2024-07-26 13:28:21.783301] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:41.380 request: 00:28:41.380 { 00:28:41.380 "base_bdev": "BaseBdev1", 00:28:41.380 "raid_bdev": "raid_bdev1", 00:28:41.380 "method": "bdev_raid_add_base_bdev", 00:28:41.380 "req_id": 1 00:28:41.380 } 00:28:41.380 Got JSON-RPC error response 00:28:41.380 response: 00:28:41.380 { 00:28:41.380 "code": -22, 00:28:41.380 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:41.380 } 00:28:41.380 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:28:41.380 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 
00:28:41.380 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:41.380 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:41.380 13:28:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@793 -- # sleep 1 00:28:42.315 13:28:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:42.315 13:28:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:42.316 13:28:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:42.316 13:28:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:42.316 13:28:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:42.316 13:28:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:42.316 13:28:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:42.316 13:28:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:42.316 13:28:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:42.316 13:28:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:42.316 13:28:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.316 13:28:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:42.575 13:28:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:42.575 "name": "raid_bdev1", 00:28:42.575 "uuid": "d3bfd42f-2c66-4222-90b4-4ea51747f4e4", 00:28:42.575 "strip_size_kb": 0, 00:28:42.575 "state": "online", 00:28:42.575 "raid_level": "raid1", 00:28:42.575 "superblock": true, 00:28:42.575 "num_base_bdevs": 2, 00:28:42.575 "num_base_bdevs_discovered": 1, 00:28:42.575 "num_base_bdevs_operational": 1, 00:28:42.575 "base_bdevs_list": [ 00:28:42.575 { 00:28:42.575 "name": null, 00:28:42.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:42.575 "is_configured": false, 00:28:42.575 "data_offset": 256, 00:28:42.575 "data_size": 7936 00:28:42.575 }, 00:28:42.575 { 00:28:42.575 "name": "BaseBdev2", 00:28:42.575 "uuid": "40963802-50b5-57ca-893b-4ae20f71d674", 00:28:42.575 "is_configured": true, 00:28:42.575 "data_offset": 256, 00:28:42.575 "data_size": 7936 00:28:42.575 } 00:28:42.575 ] 00:28:42.575 }' 00:28:42.575 13:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:42.575 13:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:43.143 13:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:43.143 13:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:43.143 13:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:43.143 13:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:43.143 13:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:43.143 13:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:43.143 13:28:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:43.402 13:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:43.402 "name": "raid_bdev1", 00:28:43.402 "uuid": "d3bfd42f-2c66-4222-90b4-4ea51747f4e4", 00:28:43.402 "strip_size_kb": 0, 00:28:43.402 "state": "online", 00:28:43.402 "raid_level": "raid1", 00:28:43.402 "superblock": true, 00:28:43.402 "num_base_bdevs": 2, 00:28:43.402 "num_base_bdevs_discovered": 1, 00:28:43.402 "num_base_bdevs_operational": 1, 00:28:43.402 "base_bdevs_list": [ 00:28:43.402 { 00:28:43.402 "name": null, 00:28:43.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:43.402 "is_configured": false, 00:28:43.402 "data_offset": 256, 00:28:43.402 "data_size": 7936 00:28:43.402 }, 00:28:43.402 { 00:28:43.402 "name": "BaseBdev2", 00:28:43.402 "uuid": "40963802-50b5-57ca-893b-4ae20f71d674", 00:28:43.402 "is_configured": true, 00:28:43.402 "data_offset": 256, 00:28:43.402 "data_size": 7936 00:28:43.402 } 00:28:43.402 ] 00:28:43.402 }' 00:28:43.402 13:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:43.402 13:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:43.402 13:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:43.662 13:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:43.662 13:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@798 -- # killprocess 844975 00:28:43.662 13:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 844975 ']' 00:28:43.662 13:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@954 -- # kill -0 844975 00:28:43.662 13:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:28:43.662 13:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:43.662 13:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 844975 00:28:43.662 13:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:43.662 13:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:43.662 13:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 844975' 00:28:43.662 killing process with pid 844975 00:28:43.662 13:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 844975 00:28:43.662 Received shutdown signal, test time was about 60.000000 seconds 00:28:43.662 00:28:43.662 Latency(us) 00:28:43.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.662 =================================================================================================================== 00:28:43.662 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:43.662 [2024-07-26 13:28:23.989080] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:43.662 [2024-07-26 13:28:23.989169] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:43.662 [2024-07-26 13:28:23.989208] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:43.662 [2024-07-26 13:28:23.989220] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17de870 name raid_bdev1, state offline 00:28:43.662 13:28:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@974 -- # wait 844975 00:28:43.662 [2024-07-26 13:28:24.014043] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:43.922 13:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@800 -- # return 0 00:28:43.922 00:28:43.922 real 0m27.393s 00:28:43.922 user 0m43.207s 00:28:43.922 sys 0m3.679s 00:28:43.922 13:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:43.922 13:28:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:28:43.922 ************************************ 00:28:43.922 END TEST raid_rebuild_test_sb_md_interleaved 00:28:43.922 ************************************ 00:28:43.922 13:28:24 bdev_raid -- bdev/bdev_raid.sh@996 -- # trap - EXIT 00:28:43.922 13:28:24 bdev_raid -- bdev/bdev_raid.sh@997 -- # cleanup 00:28:43.922 13:28:24 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 844975 ']' 00:28:43.922 13:28:24 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 844975 00:28:43.922 13:28:24 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:28:43.922 00:28:43.922 real 17m39.911s 00:28:43.922 user 29m53.200s 00:28:43.922 sys 3m12.098s 00:28:43.922 13:28:24 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:43.922 13:28:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:43.922 ************************************ 00:28:43.922 END TEST bdev_raid 00:28:43.922 ************************************ 00:28:43.922 13:28:24 -- spdk/autotest.sh@195 -- # run_test bdevperf_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:28:43.922 13:28:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:43.922 13:28:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:43.922 13:28:24 -- common/autotest_common.sh@10 -- # set +x 00:28:43.922 ************************************ 00:28:43.922 START TEST bdevperf_config 00:28:43.922 
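The `killprocess` helper traced at `@950`-`@974` above follows a guarded kill-and-wait pattern. A sketch reconstructed from the xtrace lines (this is an illustration inferred from the trace, not the verbatim `common/autotest_common.sh` source):

```shell
# killprocess: refuse empty pids, skip already-dead processes, never kill a
# sudo wrapper, then TERM the pid and reap it. Reconstructed from the trace.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                    # @950: '[' -z "$pid" ']'
    kill -0 "$pid" 2>/dev/null || return 0       # @954: already gone
    if [ "$(uname)" = Linux ]; then              # @955
        local name
        name=$(ps --no-headers -o comm= "$pid")  # @956: e.g. reactor_0
        [ "$name" = sudo ] && return 1           # @960: don't kill sudo itself
    fi
    echo "killing process with pid $pid"         # @968
    kill "$pid"                                  # @969
    wait "$pid" 2>/dev/null || true              # @974: reap child if ours
}

sleep 60 & victim=$!
killprocess "$victim"
```

`kill -0` sends no signal; it only probes whether the pid exists, which is why the trace shows it before any real `kill`.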
************************************ 00:28:43.922 13:28:24 bdevperf_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:28:44.181 * Looking for test storage... 00:28:44.181 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf 00:28:44.181 13:28:24 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/common.sh 00:28:44.181 13:28:24 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:44.182 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:28:44.182 13:28:24 bdevperf_config -- 
bdevperf/common.sh@8 -- # local job_section=job0 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:44.182 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:44.182 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:44.182 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:28:44.182 
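The repeated `create_job` traces above (`common.sh@8`-`@20`) each append one INI-style section to `test.conf`, emitting `rw=`/`filename=` lines only when those arguments are set. A sketch reconstructed from the xtrace output (the temp-file path and the exact body are assumptions; the real helper also `cat`s extra keys for the `[global]` section):

```shell
# create_job SECTION [RW] [FILENAME]: append an INI job section, as the
# bdevperf config tests do. Body inferred from the trace, not verbatim source.
testconf=$(mktemp)
create_job() {
    local job_section=${1:?} rw=${2:-} filename=${3:-}
    {
        echo "[$job_section]"
        [ -n "$rw" ] && echo "rw=$rw"
        [ -n "$filename" ] && echo "filename=$filename"
        echo
    } >> "$testconf"
}
create_job global read Malloc0
create_job job0
```

Calling it once per job, as the trace does for `global` and `job0`..`job3`, builds the `-j test.conf` file that the later bdevperf invocation consumes.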
13:28:24 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:44.182 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:44.182 13:28:24 bdevperf_config -- bdevperf/test_config.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:28:46.721 13:28:27 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-26 13:28:24.583806] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:28:46.721 [2024-07-26 13:28:24.583867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid850173 ] 00:28:46.721 Using job config with 4 jobs 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3f:02.1 
cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:46.721 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.721 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:46.721 [2024-07-26 13:28:24.716596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.721 [2024-07-26 13:28:24.818239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.721 cpumask for '\''job0'\'' is too big 00:28:46.721 cpumask for '\''job1'\'' is too big 00:28:46.721 cpumask for '\''job2'\'' is too big 00:28:46.721 cpumask for '\''job3'\'' is too big 00:28:46.721 Running I/O for 2 seconds... 
00:28:46.721 00:28:46.721 Latency(us) 00:28:46.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.721 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:46.721 Malloc0 : 2.01 26063.32 25.45 0.00 0.00 9817.62 1703.94 14994.64 00:28:46.721 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:46.721 Malloc0 : 2.02 26041.23 25.43 0.00 0.00 9805.61 1690.83 13264.49 00:28:46.721 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:46.721 Malloc0 : 2.02 26082.18 25.47 0.00 0.00 9770.55 1677.72 11586.76 00:28:46.721 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:46.721 Malloc0 : 2.02 26060.20 25.45 0.00 0.00 9758.34 1690.83 10118.76 00:28:46.721 =================================================================================================================== 00:28:46.721 Total : 104246.93 101.80 0.00 0.00 9787.98 1677.72 14994.64' 00:28:46.721 13:28:27 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-26 13:28:24.583806] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:28:46.722 00:28:46.722 Latency(us) 00:28:46.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.722 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:46.722 Malloc0 : 2.01 26063.32 25.45 0.00 0.00 9817.62 1703.94 14994.64 00:28:46.722 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:46.722 Malloc0 : 2.02 26041.23 25.43 0.00 0.00 9805.61 1690.83 13264.49 00:28:46.722 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:46.722 Malloc0 : 2.02 26082.18 25.47 0.00 0.00 9770.55 1677.72 11586.76 00:28:46.722 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:46.722 Malloc0 : 2.02 26060.20 25.45 0.00 0.00 9758.34 1690.83 10118.76 00:28:46.722 =================================================================================================================== 00:28:46.722 Total : 104246.93 101.80 0.00 0.00 9787.98 1677.72 14994.64' 00:28:46.722 13:28:27 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-26 13:28:24.583806] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:28:46.723 00:28:46.723 Latency(us) 00:28:46.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.723 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:46.723 Malloc0 : 2.01 26063.32 25.45 0.00 0.00 9817.62 1703.94 14994.64 00:28:46.723 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:46.723 Malloc0 : 2.02 26041.23 25.43 0.00 0.00 9805.61 1690.83 13264.49 00:28:46.723 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:46.723 Malloc0 : 2.02 26082.18 25.47 0.00 0.00 9770.55 1677.72 11586.76 00:28:46.723 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:46.723 Malloc0 : 2.02 26060.20 25.45 0.00 0.00 9758.34 1690.83 10118.76 00:28:46.723 =================================================================================================================== 00:28:46.723 Total : 104246.93 101.80 0.00 0.00 9787.98 1677.72 14994.64' 00:28:46.723 13:28:27 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:28:46.723 13:28:27 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:28:46.723 13:28:27 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:28:46.723 13:28:27 bdevperf_config -- bdevperf/test_config.sh@25 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -C -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:28:46.983 [2024-07-26 13:28:27.275657] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
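The `common.sh@32` pipeline above (`grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'`) is how `test_config.sh@23` extracts the job count that it then compares against the expected `4`. The same extraction as a standalone sketch:

```shell
# get_num_jobs: pull N out of bdevperf's "Using job config with N jobs" line,
# mirroring the bdevperf/common.sh@32 pipeline traced above.
get_num_jobs() {
    echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
}
njobs=$(get_num_jobs "Using job config with 4 jobs")   # prints nothing; njobs=4
```

The two-stage grep keeps the match anchored to the full phrase first, so stray digits elsewhere in the captured output (timestamps, pids) cannot leak into the count.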
00:28:46.983 [2024-07-26 13:28:27.275726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid850473 ] 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3d:02.3 cannot be used 
00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:46.983 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:46.983 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:46.983 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:46.983 [2024-07-26 13:28:27.421406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.243 [2024-07-26 13:28:27.525581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.243 cpumask for 'job0' is too big 00:28:47.243 cpumask for 'job1' is too big 00:28:47.243 cpumask for 'job2' is too big 00:28:47.243 cpumask for 'job3' is too big 00:28:49.780 13:28:29 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:28:49.780 Running I/O for 2 seconds... 
00:28:49.780 00:28:49.780 Latency(us) 00:28:49.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.780 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:49.780 Malloc0 : 2.01 25793.00 25.19 0.00 0.00 9920.04 1703.94 15204.35 00:28:49.780 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:49.780 Malloc0 : 2.02 25770.24 25.17 0.00 0.00 9907.69 1677.72 13421.77 00:28:49.780 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:49.780 Malloc0 : 2.02 25747.61 25.14 0.00 0.00 9895.16 1690.83 11691.62 00:28:49.780 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:28:49.780 Malloc0 : 2.02 25819.53 25.21 0.00 0.00 9847.33 871.63 10328.47 00:28:49.780 =================================================================================================================== 00:28:49.780 Total : 103130.38 100.71 0.00 0.00 9892.50 871.63 15204.35' 00:28:49.780 13:28:29 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:28:49.780 13:28:29 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:28:49.780 13:28:29 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:28:49.780 13:28:29 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:28:49.780 13:28:29 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:28:49.780 13:28:29 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:28:49.780 13:28:29 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:28:49.780 13:28:29 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:28:49.780 13:28:29 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:49.780 00:28:49.780 13:28:29 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:49.780 13:28:29 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 
write Malloc0 00:28:49.780 13:28:29 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:28:49.780 13:28:29 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:28:49.780 13:28:29 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:28:49.780 13:28:29 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:28:49.780 13:28:29 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:28:49.780 13:28:29 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:49.780 00:28:49.780 13:28:29 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:49.780 13:28:29 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:28:49.780 13:28:29 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:28:49.780 13:28:29 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:28:49.780 13:28:29 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:28:49.780 13:28:29 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:28:49.780 13:28:29 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:28:49.780 13:28:29 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:49.780 00:28:49.780 13:28:29 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:49.780 13:28:29 bdevperf_config -- bdevperf/test_config.sh@32 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:28:52.357 13:28:32 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-26 13:28:29.989216] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:28:52.357 [2024-07-26 13:28:29.989279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid850984 ] 00:28:52.357 Using job config with 3 jobs 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3f:02.1 
cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:52.357 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:52.357 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:52.357 [2024-07-26 13:28:30.133244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.357 [2024-07-26 13:28:30.234680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.357 cpumask for '\''job0'\'' is too big 00:28:52.357 cpumask for '\''job1'\'' is too big 00:28:52.357 cpumask for '\''job2'\'' is too big 00:28:52.357 Running I/O for 2 seconds... 
00:28:52.357 00:28:52.357 Latency(us) 00:28:52.357 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:52.357 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:52.357 Malloc0 : 2.01 34975.13 34.16 0.00 0.00 7311.66 1671.17 10747.90 00:28:52.357 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:52.357 Malloc0 : 2.01 34945.27 34.13 0.00 0.00 7303.38 1651.51 9070.18 00:28:52.357 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:28:52.357 Malloc0 : 2.02 34915.53 34.10 0.00 0.00 7294.51 1651.51 7654.60 00:28:52.358 =================================================================================================================== 00:28:52.358 Total : 104835.92 102.38 0.00 0.00 7303.18 1651.51 10747.90' 00:28:52.358 13:28:32 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[output identical to the bdevperf_output captured above]' 00:28:52.358 13:28:32 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[output identical to the bdevperf_output captured above]' 00:28:52.358 13:28:32 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:28:52.358 13:28:32 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:28:52.358 13:28:32 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:28:52.358 13:28:32 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:28:52.358 13:28:32 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:28:52.358 13:28:32 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:28:52.358 13:28:32 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:28:52.358 13:28:32 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:28:52.358 13:28:32 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:28:52.358
13:28:32 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:28:52.358 13:28:32 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:28:52.358 13:28:32 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:28:52.358 13:28:32 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:52.358 00:28:52.358 13:28:32 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:52.358 13:28:32 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:28:52.358 13:28:32 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:28:52.358 13:28:32 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:28:52.358 13:28:32 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:28:52.358 13:28:32 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:28:52.358 13:28:32 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:28:52.359 13:28:32 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:52.359 00:28:52.359 13:28:32 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:52.359 13:28:32 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:28:52.359 13:28:32 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:28:52.359 13:28:32 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:28:52.359 13:28:32 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:28:52.359 13:28:32 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:28:52.359 13:28:32 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:28:52.359 13:28:32 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:52.359 00:28:52.359 13:28:32 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:52.359 13:28:32 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:28:52.359 13:28:32 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:28:52.359 13:28:32 bdevperf_config -- bdevperf/common.sh@9 -- # local 
rw= 00:28:52.359 13:28:32 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:28:52.359 13:28:32 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:28:52.359 13:28:32 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:28:52.359 13:28:32 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:52.359 00:28:52.359 13:28:32 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:52.359 13:28:32 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:28:52.359 13:28:32 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:28:52.359 13:28:32 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:28:52.359 13:28:32 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:28:52.359 13:28:32 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:28:52.359 13:28:32 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:28:52.359 13:28:32 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:28:52.359 00:28:52.359 13:28:32 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:28:52.359 13:28:32 bdevperf_config -- bdevperf/test_config.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:28:54.895 13:28:35 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-26 13:28:32.716876] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:28:54.895 [2024-07-26 13:28:32.716938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid851517 ] 00:28:54.895 Using job config with 4 jobs 00:28:54.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.895 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:54.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.895 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:54.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.895 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:54.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.895 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:54.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.895 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:54.895 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3f:02.1 
cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:54.896 [2024-07-26 13:28:32.864809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.896 [2024-07-26 13:28:32.970222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.896 cpumask for '\''job0'\'' is too big 00:28:54.896 cpumask for '\''job1'\'' is too big 00:28:54.896 cpumask for '\''job2'\'' is too big 00:28:54.896 cpumask for '\''job3'\'' is too big 00:28:54.896 Running I/O for 2 seconds... 
00:28:54.896 00:28:54.896 Latency(us) 00:28:54.896 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.896 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:54.896 Malloc0 : 2.04 12923.48 12.62 0.00 0.00 19803.89 3512.73 30408.70 00:28:54.896 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:54.896 Malloc1 : 2.04 12912.34 12.61 0.00 0.00 19804.27 4272.95 30408.70 00:28:54.896 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:54.896 Malloc0 : 2.04 12901.54 12.60 0.00 0.00 19758.49 3460.30 27053.26 00:28:54.896 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:54.896 Malloc1 : 2.05 12890.41 12.59 0.00 0.00 19759.28 4246.73 27053.26 00:28:54.896 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:54.896 Malloc0 : 2.05 12879.64 12.58 0.00 0.00 19708.61 3460.30 23488.10 00:28:54.896 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:54.896 Malloc1 : 2.05 12868.61 12.57 0.00 0.00 19709.29 4299.16 23383.24 00:28:54.896 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:54.896 Malloc0 : 2.05 12857.79 12.56 0.00 0.00 19659.79 3460.30 20237.52 00:28:54.896 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:54.896 Malloc1 : 2.05 12846.83 12.55 0.00 0.00 19658.46 4246.73 20237.52 00:28:54.896 =================================================================================================================== 00:28:54.896 Total : 103080.64 100.66 0.00 0.00 19732.76 3460.30 30408.70' 00:28:54.896 13:28:35 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-26 13:28:32.716876] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:28:54.896 [2024-07-26 13:28:32.716938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid851517 ] 00:28:54.896 Using job config with 4 jobs 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.896 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:54.896 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3f:02.1 
cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:54.897 [2024-07-26 13:28:32.864809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.897 [2024-07-26 13:28:32.970222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.897 cpumask for '\''job0'\'' is too big 00:28:54.897 cpumask for '\''job1'\'' is too big 00:28:54.897 cpumask for '\''job2'\'' is too big 00:28:54.897 cpumask for '\''job3'\'' is too big 00:28:54.897 Running I/O for 2 seconds... 
00:28:54.897 00:28:54.897 Latency(us) 00:28:54.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.897 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:54.897 Malloc0 : 2.04 12923.48 12.62 0.00 0.00 19803.89 3512.73 30408.70 00:28:54.897 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:54.897 Malloc1 : 2.04 12912.34 12.61 0.00 0.00 19804.27 4272.95 30408.70 00:28:54.897 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:54.897 Malloc0 : 2.04 12901.54 12.60 0.00 0.00 19758.49 3460.30 27053.26 00:28:54.897 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:54.897 Malloc1 : 2.05 12890.41 12.59 0.00 0.00 19759.28 4246.73 27053.26 00:28:54.897 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:54.897 Malloc0 : 2.05 12879.64 12.58 0.00 0.00 19708.61 3460.30 23488.10 00:28:54.897 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:54.897 Malloc1 : 2.05 12868.61 12.57 0.00 0.00 19709.29 4299.16 23383.24 00:28:54.897 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:54.897 Malloc0 : 2.05 12857.79 12.56 0.00 0.00 19659.79 3460.30 20237.52 00:28:54.897 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:54.897 Malloc1 : 2.05 12846.83 12.55 0.00 0.00 19658.46 4246.73 20237.52 00:28:54.897 =================================================================================================================== 00:28:54.897 Total : 103080.64 100.66 0.00 0.00 19732.76 3460.30 30408.70' 00:28:54.897 13:28:35 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:28:54.897 13:28:35 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-26 13:28:32.716876] Starting SPDK v24.09-pre 
git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:28:54.897 [2024-07-26 13:28:32.716938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid851517 ] 00:28:54.897 Using job config with 4 jobs 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum 
number of QAT devices 00:28:54.897 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:28:54.897 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:54.897 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:54.897 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:54.897 [2024-07-26 13:28:32.864809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.897 [2024-07-26 13:28:32.970222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.897 cpumask for '\''job0'\'' is too big 00:28:54.898 cpumask for '\''job1'\'' is too big 00:28:54.898 cpumask for '\''job2'\'' is too big 00:28:54.898 cpumask for '\''job3'\'' is too big 00:28:54.898 Running I/O for 2 seconds... 
00:28:54.898 00:28:54.898 Latency(us) 00:28:54.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.898 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:54.898 Malloc0 : 2.04 12923.48 12.62 0.00 0.00 19803.89 3512.73 30408.70 00:28:54.898 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:54.898 Malloc1 : 2.04 12912.34 12.61 0.00 0.00 19804.27 4272.95 30408.70 00:28:54.898 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:54.898 Malloc0 : 2.04 12901.54 12.60 0.00 0.00 19758.49 3460.30 27053.26 00:28:54.898 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:54.898 Malloc1 : 2.05 12890.41 12.59 0.00 0.00 19759.28 4246.73 27053.26 00:28:54.898 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:54.898 Malloc0 : 2.05 12879.64 12.58 0.00 0.00 19708.61 3460.30 23488.10 00:28:54.898 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:54.898 Malloc1 : 2.05 12868.61 12.57 0.00 0.00 19709.29 4299.16 23383.24 00:28:54.898 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:54.898 Malloc0 : 2.05 12857.79 12.56 0.00 0.00 19659.79 3460.30 20237.52 00:28:54.898 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:28:54.898 Malloc1 : 2.05 12846.83 12.55 0.00 0.00 19658.46 4246.73 20237.52 00:28:54.898 =================================================================================================================== 00:28:54.898 Total : 103080.64 100.66 0.00 0.00 19732.76 3460.30 30408.70' 00:28:54.898 13:28:35 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:28:54.898 13:28:35 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:28:54.898 13:28:35 bdevperf_config -- bdevperf/test_config.sh@44 
-- # cleanup 00:28:54.898 13:28:35 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:28:54.898 13:28:35 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:28:54.898 00:28:54.898 real 0m11.026s 00:28:54.898 user 0m9.758s 00:28:54.898 sys 0m1.124s 00:28:54.898 13:28:35 bdevperf_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:54.898 13:28:35 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:28:54.898 ************************************ 00:28:54.898 END TEST bdevperf_config 00:28:54.898 ************************************ 00:28:55.158 13:28:35 -- spdk/autotest.sh@196 -- # uname -s 00:28:55.158 13:28:35 -- spdk/autotest.sh@196 -- # [[ Linux == Linux ]] 00:28:55.158 13:28:35 -- spdk/autotest.sh@197 -- # run_test reactor_set_interrupt /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:28:55.158 13:28:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:55.158 13:28:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:55.158 13:28:35 -- common/autotest_common.sh@10 -- # set +x 00:28:55.158 ************************************ 00:28:55.158 START TEST reactor_set_interrupt 00:28:55.158 ************************************ 00:28:55.158 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:28:55.158 * Looking for test storage... 
00:28:55.158 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:28:55.158 13:28:35 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:28:55.158 13:28:35 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:28:55.158 13:28:35 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:28:55.158 13:28:35 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:28:55.158 13:28:35 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 00:28:55.158 13:28:35 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:28:55.158 13:28:35 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:28:55.158 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:28:55.158 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:28:55.158 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:28:55.158 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:28:55.158 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:28:55.158 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:28:55.158 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:28:55.158 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:28:55.158 
13:28:35 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_CET=n 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:28:55.158 13:28:35 reactor_set_interrupt -- 
common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 
00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_FC=n 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 
00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:28:55.158 13:28:35 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:28:55.158 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:28:55.158 13:28:35 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:28:55.158 13:28:35 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:28:55.158 13:28:35 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:28:55.158 13:28:35 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:28:55.158 13:28:35 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:28:55.158 13:28:35 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:28:55.158 13:28:35 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:28:55.158 13:28:35 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:28:55.158 13:28:35 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:28:55.158 13:28:35 reactor_set_interrupt -- 
common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:28:55.158 13:28:35 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:28:55.158 13:28:35 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:28:55.158 13:28:35 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:28:55.158 13:28:35 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:28:55.158 13:28:35 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:28:55.158 #define SPDK_CONFIG_H 00:28:55.158 #define SPDK_CONFIG_APPS 1 00:28:55.158 #define SPDK_CONFIG_ARCH native 00:28:55.158 #undef SPDK_CONFIG_ASAN 00:28:55.158 #undef SPDK_CONFIG_AVAHI 00:28:55.158 #undef SPDK_CONFIG_CET 00:28:55.158 #define SPDK_CONFIG_COVERAGE 1 00:28:55.158 #define SPDK_CONFIG_CROSS_PREFIX 00:28:55.158 #define SPDK_CONFIG_CRYPTO 1 00:28:55.158 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:28:55.158 #undef SPDK_CONFIG_CUSTOMOCF 00:28:55.158 #undef SPDK_CONFIG_DAOS 00:28:55.158 #define SPDK_CONFIG_DAOS_DIR 00:28:55.158 #define SPDK_CONFIG_DEBUG 1 00:28:55.158 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:28:55.158 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:28:55.158 #define SPDK_CONFIG_DPDK_INC_DIR 00:28:55.158 #define SPDK_CONFIG_DPDK_LIB_DIR 00:28:55.158 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:28:55.158 #undef SPDK_CONFIG_DPDK_UADK 00:28:55.158 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:28:55.158 #define SPDK_CONFIG_EXAMPLES 1 00:28:55.158 #undef SPDK_CONFIG_FC 00:28:55.158 #define SPDK_CONFIG_FC_PATH 00:28:55.158 #define SPDK_CONFIG_FIO_PLUGIN 1 00:28:55.158 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:28:55.158 #undef SPDK_CONFIG_FUSE 00:28:55.158 #undef SPDK_CONFIG_FUZZER 00:28:55.158 #define SPDK_CONFIG_FUZZER_LIB 
00:28:55.158 #undef SPDK_CONFIG_GOLANG 00:28:55.158 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:28:55.158 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:28:55.158 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:28:55.158 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:28:55.158 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:28:55.158 #undef SPDK_CONFIG_HAVE_LIBBSD 00:28:55.158 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:28:55.158 #define SPDK_CONFIG_IDXD 1 00:28:55.158 #define SPDK_CONFIG_IDXD_KERNEL 1 00:28:55.158 #define SPDK_CONFIG_IPSEC_MB 1 00:28:55.158 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:28:55.158 #define SPDK_CONFIG_ISAL 1 00:28:55.158 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:28:55.158 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:28:55.158 #define SPDK_CONFIG_LIBDIR 00:28:55.158 #undef SPDK_CONFIG_LTO 00:28:55.158 #define SPDK_CONFIG_MAX_LCORES 128 00:28:55.158 #define SPDK_CONFIG_NVME_CUSE 1 00:28:55.158 #undef SPDK_CONFIG_OCF 00:28:55.158 #define SPDK_CONFIG_OCF_PATH 00:28:55.158 #define SPDK_CONFIG_OPENSSL_PATH 00:28:55.158 #undef SPDK_CONFIG_PGO_CAPTURE 00:28:55.158 #define SPDK_CONFIG_PGO_DIR 00:28:55.158 #undef SPDK_CONFIG_PGO_USE 00:28:55.158 #define SPDK_CONFIG_PREFIX /usr/local 00:28:55.158 #undef SPDK_CONFIG_RAID5F 00:28:55.158 #undef SPDK_CONFIG_RBD 00:28:55.158 #define SPDK_CONFIG_RDMA 1 00:28:55.159 #define SPDK_CONFIG_RDMA_PROV verbs 00:28:55.159 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:28:55.159 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:28:55.159 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:28:55.159 #define SPDK_CONFIG_SHARED 1 00:28:55.159 #undef SPDK_CONFIG_SMA 00:28:55.159 #define SPDK_CONFIG_TESTS 1 00:28:55.159 #undef SPDK_CONFIG_TSAN 00:28:55.159 #define SPDK_CONFIG_UBLK 1 00:28:55.159 #define SPDK_CONFIG_UBSAN 1 00:28:55.159 #undef SPDK_CONFIG_UNIT_TESTS 00:28:55.159 #undef SPDK_CONFIG_URING 00:28:55.159 #define SPDK_CONFIG_URING_PATH 00:28:55.159 #undef SPDK_CONFIG_URING_ZNS 00:28:55.159 #undef 
SPDK_CONFIG_USDT 00:28:55.159 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:28:55.159 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:28:55.159 #undef SPDK_CONFIG_VFIO_USER 00:28:55.159 #define SPDK_CONFIG_VFIO_USER_DIR 00:28:55.159 #define SPDK_CONFIG_VHOST 1 00:28:55.159 #define SPDK_CONFIG_VIRTIO 1 00:28:55.159 #undef SPDK_CONFIG_VTUNE 00:28:55.159 #define SPDK_CONFIG_VTUNE_DIR 00:28:55.159 #define SPDK_CONFIG_WERROR 1 00:28:55.159 #define SPDK_CONFIG_WPDK_DIR 00:28:55.159 #undef SPDK_CONFIG_XNVME 00:28:55.159 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:28:55.159 13:28:35 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:28:55.159 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:28:55.159 13:28:35 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:55.159 13:28:35 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:55.159 13:28:35 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:55.159 13:28:35 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.159 13:28:35 reactor_set_interrupt -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.159 13:28:35 reactor_set_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.159 13:28:35 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH 00:28:55.159 13:28:35 reactor_set_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.159 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:28:55.159 13:28:35 reactor_set_interrupt -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:28:55.159 13:28:35 reactor_set_interrupt -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:28:55.159 13:28:35 reactor_set_interrupt -- pm/common@6 -- # 
_pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:28:55.159 13:28:35 reactor_set_interrupt -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:28:55.159 13:28:35 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:28:55.159 13:28:35 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:28:55.159 13:28:35 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:28:55.159 13:28:35 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:28:55.159 13:28:35 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:28:55.420 13:28:35 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:28:55.420 13:28:35 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:28:55.420 13:28:35 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:28:55.420 13:28:35 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:28:55.420 13:28:35 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:28:55.420 13:28:35 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:28:55.420 13:28:35 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:28:55.420 13:28:35 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:28:55.420 13:28:35 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:28:55.420 13:28:35 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:28:55.420 13:28:35 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:28:55.420 13:28:35 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:28:55.420 13:28:35 reactor_set_interrupt -- pm/common@81 -- # [[ 
............................... != QEMU ]] 00:28:55.420 13:28:35 reactor_set_interrupt -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:28:55.420 13:28:35 reactor_set_interrupt -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:28:55.420 13:28:35 reactor_set_interrupt -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:28:55.420 13:28:35 reactor_set_interrupt -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@70 -- # : 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 1 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export 
SPDK_TEST_ISAL 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:28:55.420 13:28:35 reactor_set_interrupt -- 
common/autotest_common.sh@96 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0 00:28:55.420 
13:28:35 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 1 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@120 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@124 -- # : 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 1 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export 
SPDK_TEST_OPAL 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : true 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@142 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@154 -- # : 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:28:55.420 13:28:35 reactor_set_interrupt -- 
common/autotest_common.sh@158 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 1 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:28:55.420 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@166 -- # : 0 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@169 -- # : 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@173 -- # : 0 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:28:55.421 
13:28:35 reactor_set_interrupt -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@183 
-- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@200 -- # 
asan_suppression_file=/var/tmp/asan_suppression_file 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@202 -- # cat 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@255 -- # 
QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@265 -- # export valgrind= 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@265 -- # valgrind= 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@271 -- # uname -s 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@274 -- # [[ 1 -eq 1 ]] 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@278 -- # export HUGE_EVEN_ALLOC=yes 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@278 -- 
# HUGE_EVEN_ALLOC=yes 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@281 -- # MAKE=make 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j112 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@301 -- # TEST_MODE= 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@320 -- # [[ -z 851993 ]] 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@320 -- # kill -0 851993 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local mount target_dir 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.6gf4cp 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:28:55.421 
13:28:35 reactor_set_interrupt -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.6gf4cp/tests/interrupt /tmp/spdk.6gf4cp 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@329 -- # df -T 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:28:55.421 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=954302464 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=4330127360 
00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=55088648192 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=61742305280 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=6653657088 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=30866341888 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871150592 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=4808704 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=12338663424 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=12348461056 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=9797632 00:28:55.422 13:28:35 
reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=30869889024 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871154688 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=1265664 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=6174224384 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=6174228480 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:28:55.422 * Looking for test storage... 
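The storage scan traced above reads `df -T`-style rows into parallel associative arrays keyed by mount point (`mounts`, `fss`, `avails`, `sizes`, `uses`). A minimal standalone sketch of that loop, using a hypothetical hard-coded sample instead of the live `df` output:

```shell
#!/usr/bin/env bash
# Sketch of the autotest_common.sh mount scan: parse df -T style rows
# (source, fstype, size, used, avail, use%, mountpoint) into associative
# arrays keyed by mount point. The two sample rows below are hypothetical.
declare -A mounts fss avails sizes uses

df_output='spdk_root overlay 61742305280 6653657088 55088648192 90% /
tmpfs tmpfs 30871150592 4808704 30866341888 1% /dev/shm'

while read -r source fs size use avail _ mount; do
  mounts["$mount"]=$source
  fss["$mount"]=$fs
  avails["$mount"]=$avail
  sizes["$mount"]=$size
  uses["$mount"]=$use
done <<< "$df_output"

echo "${fss[/]} ${avails[/]}"   # -> overlay 55088648192
```

The later `target_space=... / (( target_space >= requested_size ))` steps in the trace then index these arrays by the mount point that `df` reports for the candidate test directory.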
00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@370 -- # local target_space new_size 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mount=/ 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@376 -- # target_space=55088648192 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@383 -- # new_size=8868249600 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 
00:28:55.422 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@391 -- # return 0 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # set -o errtrace 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@1687 -- # true 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@1689 -- # xtrace_fd 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:28:55.422 13:28:35 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:28:55.422 13:28:35 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:55.422 13:28:35 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:28:55.422 13:28:35 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:28:55.422 13:28:35 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:28:55.422 13:28:35 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:28:55.422 13:28:35 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:28:55.422 13:28:35 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:28:55.422 13:28:35 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:28:55.422 13:28:35 
reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:28:55.422 13:28:35 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.422 13:28:35 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:28:55.422 13:28:35 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=852120 00:28:55.422 13:28:35 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:55.422 13:28:35 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 852120 /var/tmp/spdk.sock 00:28:55.422 13:28:35 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@831 -- # '[' -z 852120 ']' 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:55.422 13:28:35 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:28:55.422 [2024-07-26 13:28:35.824589] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:28:55.422 [2024-07-26 13:28:35.824648] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid852120 ]
00:28:55.422 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:28:55.422 EAL: Requested device 0000:3d:01.0 cannot be used
[the two messages above repeat for each remaining QAT virtual function, 0000:3d:01.1 through 0000:3f:02.7]
00:28:55.682 [2024-07-26 13:28:35.947151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:28:55.682 [2024-07-26 13:28:36.035795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:55.682 [2024-07-26 13:28:36.035891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:28:55.682 [2024-07-26 13:28:36.035894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:28:55.682 [2024-07-26 13:28:36.103946] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:28:56.250 13:28:36 reactor_set_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:56.250 13:28:36 reactor_set_interrupt -- common/autotest_common.sh@864 -- # return 0 00:28:56.250 13:28:36 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:28:56.250 13:28:36 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:56.510 Malloc0 00:28:56.510 Malloc1 00:28:56.510 Malloc2 00:28:56.510 13:28:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:28:56.510 13:28:37 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:28:56.510 13:28:37 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:28:56.510 13:28:37 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:28:56.769 5000+0 records in 00:28:56.769 5000+0 records out 00:28:56.769 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0260275 s, 393 MB/s 00:28:56.769 13:28:37 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:28:56.769 AIO0 00:28:57.028 13:28:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 852120 00:28:57.028 13:28:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 852120 without_thd 00:28:57.028 13:28:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=852120 00:28:57.028 13:28:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:28:57.028 13:28:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 
00:28:57.028 13:28:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:28:57.028 13:28:37 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:28:57.028 13:28:37 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:28:57.028 13:28:37 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:28:57.028 13:28:37 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:57.028 13:28:37 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:28:57.028 13:28:37 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:57.028 13:28:37 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:28:57.028 13:28:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:28:57.028 13:28:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:28:57.028 13:28:37 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:28:57.028 13:28:37 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:28:57.028 13:28:37 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:28:57.028 13:28:37 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:57.028 13:28:37 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:28:57.028 13:28:37 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:28:57.287 13:28:37 reactor_set_interrupt -- interrupt/common.sh@62 -- # 
echo '' 00:28:57.287 13:28:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:28:57.287 13:28:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:28:57.287 spdk_thread ids are 1 on reactor0. 00:28:57.287 13:28:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:28:57.287 13:28:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 852120 0 00:28:57.287 13:28:37 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 852120 0 idle 00:28:57.287 13:28:37 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=852120 00:28:57.287 13:28:37 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:28:57.287 13:28:37 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:57.287 13:28:37 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:28:57.287 13:28:37 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:28:57.287 13:28:37 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:57.287 13:28:37 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:57.287 13:28:37 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:57.287 13:28:37 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 852120 -w 256 00:28:57.287 13:28:37 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:28:57.547 13:28:37 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 852120 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.36 reactor_0' 00:28:57.547 13:28:37 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 852120 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.36 reactor_0 00:28:57.547 13:28:37 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:57.547 13:28:37 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # awk '{print $9}' 00:28:57.547 13:28:37 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:28:57.547 13:28:37 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:28:57.547 13:28:37 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:28:57.547 13:28:37 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:28:57.547 13:28:37 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:28:57.547 13:28:37 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:57.547 13:28:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:28:57.547 13:28:37 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 852120 1 00:28:57.547 13:28:37 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 852120 1 idle 00:28:57.547 13:28:37 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=852120 00:28:57.547 13:28:37 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:28:57.547 13:28:37 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:57.547 13:28:37 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:28:57.547 13:28:37 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:28:57.547 13:28:37 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:57.547 13:28:37 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:57.547 13:28:37 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:57.547 13:28:37 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 852120 -w 256 00:28:57.547 13:28:37 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:28:57.547 13:28:38 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 852123 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_1' 
00:28:57.547 13:28:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 852123 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_1 00:28:57.547 13:28:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:57.547 13:28:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 852120 2 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 852120 2 idle 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=852120 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:57.806 13:28:38 reactor_set_interrupt -- 
interrupt/common.sh@24 -- # top -bHn 1 -p 852120 -w 256 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 852124 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_2' 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 852124 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_2 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:28:57.806 13:28:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:28:58.065 [2024-07-26 13:28:38.468737] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
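The `reactor_get_thread_ids` calls traced earlier pipe `rpc.py thread_get_stats` into jq, selecting the ids of threads whose cpumask matches the reactor's mask after the `0x` prefix is reduced to a plain number. A sketch of just that filter, run against a hypothetical canned JSON payload rather than a live SPDK socket:

```shell
#!/usr/bin/env bash
# Sketch of reactor_get_thread_ids: pick thread ids by cpumask from a
# thread_get_stats-style JSON document. The payload below is hypothetical
# sample data standing in for `rpc.py thread_get_stats` output.
stats='{"threads":[{"id":1,"cpumask":"1"},{"id":2,"cpumask":"2"},{"id":3,"cpumask":"4"}]}'

reactor_cpumask=0x1
reactor_cpumask=$((reactor_cpumask))   # "0x1" -> 1, as in the traced reactor_cpumask=1

jq --arg reactor_cpumask "$reactor_cpumask" \
   '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' <<< "$stats"
```

In the trace the same filter run with mask `0x4` printed nothing (reactor 2 had no pollers yet), which is why the subsequent `echo ''` produced an empty `thd2_ids` list.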
00:28:58.065 13:28:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:28:58.325 [2024-07-26 13:28:38.704394] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:28:58.325 [2024-07-26 13:28:38.704628] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:58.325 13:28:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:28:58.584 [2024-07-26 13:28:38.928386] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:28:58.584 [2024-07-26 13:28:38.928550] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:58.584 13:28:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:28:58.584 13:28:38 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 852120 0 00:28:58.584 13:28:38 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 852120 0 busy 00:28:58.584 13:28:38 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=852120 00:28:58.584 13:28:38 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:28:58.584 13:28:38 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:28:58.584 13:28:38 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:28:58.584 13:28:38 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:58.584 13:28:38 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:58.584 13:28:38 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:58.584 13:28:38 reactor_set_interrupt -- 
interrupt/common.sh@24 -- # top -bHn 1 -p 852120 -w 256 00:28:58.584 13:28:38 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 852120 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.77 reactor_0' 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 852120 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.77 reactor_0 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 852120 2 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 852120 2 busy 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=852120 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:58.843 13:28:39 
reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 852120 -w 256 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 852124 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.36 reactor_2' 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 852124 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.36 reactor_2 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:58.843 13:28:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:28:59.103 [2024-07-26 13:28:39.516371] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
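The `reactor_is_busy_or_idle` checks traced above grep one thread row out of `top -bHn 1 -p <pid>`, strip leading whitespace, take field 9 as the CPU rate, truncate it to an integer, and compare it against the thresholds (a busy reactor must not be below 70%, an idle one must not exceed 30%). A self-contained sketch of that parsing on a hypothetical captured top row:

```shell
#!/usr/bin/env bash
# Sketch of the reactor_is_busy_or_idle CPU check: extract the %CPU column
# from a captured `top -bHn 1` thread row. The row below is a hypothetical
# sample standing in for live top output.
top_reactor=' 852120 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.77 reactor_0'

cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
cpu_rate=${cpu_rate%.*}   # truncate "99.9" -> "99", like the traced cpu_rate=99

state=busy
if [[ $state = busy && $cpu_rate -lt 70 ]]; then
  echo "expected busy but reactor is only at ${cpu_rate}%"
elif [[ $state = idle && $cpu_rate -gt 30 ]]; then
  echo "expected idle but reactor is at ${cpu_rate}%"
else
  echo "reactor matches $state (${cpu_rate}%)"
fi
```

The retry loop in the trace (`(( j = 10 ))` down to 0) simply repeats this sample a few times so a reactor that is mid-transition gets a chance to settle before the check fails.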
00:28:59.103 [2024-07-26 13:28:39.516460] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:59.103 13:28:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:28:59.103 13:28:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 852120 2 00:28:59.103 13:28:39 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 852120 2 idle 00:28:59.103 13:28:39 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=852120 00:28:59.103 13:28:39 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:28:59.103 13:28:39 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:59.103 13:28:39 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:28:59.103 13:28:39 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:28:59.103 13:28:39 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:59.103 13:28:39 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:59.103 13:28:39 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:59.103 13:28:39 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:28:59.103 13:28:39 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 852120 -w 256 00:28:59.362 13:28:39 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 852124 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.58 reactor_2' 00:28:59.362 13:28:39 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:59.362 13:28:39 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 852124 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.58 reactor_2 00:28:59.362 13:28:39 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:59.362 13:28:39 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:28:59.362 13:28:39 
reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:28:59.362 13:28:39 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:28:59.362 13:28:39 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:28:59.362 13:28:39 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:28:59.362 13:28:39 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:59.362 13:28:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:28:59.362 [2024-07-26 13:28:39.868369] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:28:59.362 [2024-07-26 13:28:39.868511] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:28:59.362 13:28:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:28:59.362 13:28:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:28:59.362 13:28:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:28:59.621 [2024-07-26 13:28:40.100582] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:28:59.621 13:28:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 852120 0 00:28:59.621 13:28:40 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 852120 0 idle 00:28:59.621 13:28:40 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=852120 00:28:59.621 13:28:40 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:28:59.621 13:28:40 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:28:59.621 13:28:40 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:28:59.621 13:28:40 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:28:59.621 13:28:40 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:28:59.621 13:28:40 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:28:59.621 13:28:40 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:28:59.621 13:28:40 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 852120 -w 256 00:28:59.621 13:28:40 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:28:59.881 13:28:40 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 852120 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:01.53 reactor_0' 00:28:59.881 13:28:40 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:28:59.881 13:28:40 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 852120 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:01.53 reactor_0 00:28:59.881 13:28:40 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:28:59.881 13:28:40 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:28:59.881 13:28:40 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:28:59.881 13:28:40 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:28:59.881 13:28:40 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 
00:28:59.881 13:28:40 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:28:59.881 13:28:40 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:28:59.881 13:28:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:28:59.881 13:28:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:28:59.881 13:28:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:28:59.881 13:28:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 852120 00:28:59.881 13:28:40 reactor_set_interrupt -- common/autotest_common.sh@950 -- # '[' -z 852120 ']' 00:28:59.881 13:28:40 reactor_set_interrupt -- common/autotest_common.sh@954 -- # kill -0 852120 00:28:59.881 13:28:40 reactor_set_interrupt -- common/autotest_common.sh@955 -- # uname 00:28:59.881 13:28:40 reactor_set_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:59.881 13:28:40 reactor_set_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 852120 00:28:59.881 13:28:40 reactor_set_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:59.881 13:28:40 reactor_set_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:59.881 13:28:40 reactor_set_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 852120' 00:28:59.881 killing process with pid 852120 00:28:59.881 13:28:40 reactor_set_interrupt -- common/autotest_common.sh@969 -- # kill 852120 00:28:59.881 13:28:40 reactor_set_interrupt -- common/autotest_common.sh@974 -- # wait 852120 00:29:00.140 13:28:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:29:00.140 13:28:40 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:29:00.140 13:28:40 reactor_set_interrupt -- 
interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:29:00.140 13:28:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.140 13:28:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:29:00.140 13:28:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=852987 00:29:00.140 13:28:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:00.140 13:28:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:29:00.140 13:28:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 852987 /var/tmp/spdk.sock 00:29:00.140 13:28:40 reactor_set_interrupt -- common/autotest_common.sh@831 -- # '[' -z 852987 ']' 00:29:00.140 13:28:40 reactor_set_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.140 13:28:40 reactor_set_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:00.140 13:28:40 reactor_set_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:00.140 13:28:40 reactor_set_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:00.140 13:28:40 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:00.140 [2024-07-26 13:28:40.608250] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:29:00.140 [2024-07-26 13:28:40.608306] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid852987 ] 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3d:02.3 cannot be 
used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:00.140 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:00.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:00.140 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:00.400 [2024-07-26 13:28:40.716545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:00.400 [2024-07-26 13:28:40.804839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.400 [2024-07-26 13:28:40.804860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:00.400 [2024-07-26 13:28:40.804863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:00.400 [2024-07-26 13:28:40.872861] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:29:00.968 13:28:41 reactor_set_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:00.968 13:28:41 reactor_set_interrupt -- common/autotest_common.sh@864 -- # return 0 00:29:00.968 13:28:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:29:00.968 13:28:41 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:01.228 Malloc0 00:29:01.228 Malloc1 00:29:01.228 Malloc2 00:29:01.228 13:28:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:29:01.228 13:28:41 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:29:01.228 13:28:41 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:29:01.228 13:28:41 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:29:01.487 5000+0 records in 00:29:01.487 5000+0 records out 00:29:01.487 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0268811 s, 381 MB/s 00:29:01.487 13:28:41 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:29:01.487 AIO0 00:29:01.746 13:28:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 852987 00:29:01.746 13:28:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 852987 00:29:01.746 13:28:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=852987 00:29:01.746 13:28:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:29:01.746 13:28:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:29:01.746 13:28:42 
reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:29:01.746 13:28:42 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:29:01.746 13:28:42 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:29:01.746 13:28:42 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:29:01.746 13:28:42 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:01.746 13:28:42 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:29:01.746 13:28:42 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:01.746 13:28:42 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:29:01.746 13:28:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:29:01.746 13:28:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:29:01.746 13:28:42 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:29:01.746 13:28:42 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:29:01.746 13:28:42 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:29:01.746 13:28:42 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:01.746 13:28:42 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:29:01.746 13:28:42 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:29:02.006 13:28:42 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:29:02.006 
13:28:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:29:02.006 13:28:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:29:02.006 spdk_thread ids are 1 on reactor0. 00:29:02.006 13:28:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:29:02.006 13:28:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 852987 0 00:29:02.006 13:28:42 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 852987 0 idle 00:29:02.006 13:28:42 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=852987 00:29:02.006 13:28:42 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:02.006 13:28:42 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:02.006 13:28:42 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:29:02.006 13:28:42 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:29:02.006 13:28:42 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:02.006 13:28:42 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:02.006 13:28:42 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:02.006 13:28:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 852987 -w 256 00:29:02.006 13:28:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:29:02.265 13:28:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 852987 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.34 reactor_0' 00:29:02.265 13:28:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 852987 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.34 reactor_0 00:29:02.265 13:28:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:02.265 13:28:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk 
'{print $9}' 00:29:02.265 13:28:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:29:02.265 13:28:42 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:29:02.265 13:28:42 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:29:02.265 13:28:42 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:29:02.265 13:28:42 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:29:02.265 13:28:42 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:02.265 13:28:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:29:02.265 13:28:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 852987 1 00:29:02.265 13:28:42 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 852987 1 idle 00:29:02.265 13:28:42 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=852987 00:29:02.265 13:28:42 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:29:02.265 13:28:42 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:02.265 13:28:42 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:29:02.265 13:28:42 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:29:02.265 13:28:42 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:02.265 13:28:42 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:02.265 13:28:42 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:02.265 13:28:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 852987 -w 256 00:29:02.265 13:28:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:29:02.525 13:28:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 852990 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_1' 00:29:02.525 13:28:42 
reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 852990 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_1 00:29:02.525 13:28:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:02.525 13:28:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:02.525 13:28:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:29:02.525 13:28:42 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:29:02.525 13:28:42 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:29:02.525 13:28:42 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:29:02.525 13:28:42 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:29:02.525 13:28:42 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:02.525 13:28:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:29:02.525 13:28:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 852987 2 00:29:02.525 13:28:42 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 852987 2 idle 00:29:02.525 13:28:42 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=852987 00:29:02.525 13:28:42 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:29:02.525 13:28:42 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:02.525 13:28:42 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:29:02.525 13:28:42 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:29:02.525 13:28:42 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:02.525 13:28:42 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:02.525 13:28:42 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:02.525 13:28:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 
852987 -w 256 00:29:02.525 13:28:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:29:02.525 13:28:43 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 852991 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_2' 00:29:02.525 13:28:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 852991 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.00 reactor_2 00:29:02.525 13:28:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:02.525 13:28:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:02.525 13:28:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:29:02.525 13:28:43 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:29:02.525 13:28:43 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:29:02.525 13:28:43 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:29:02.525 13:28:43 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:29:02.525 13:28:43 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:02.525 13:28:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:29:02.525 13:28:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:29:02.785 [2024-07-26 13:28:43.261457] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:29:02.785 [2024-07-26 13:28:43.261647] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 
00:29:02.785 [2024-07-26 13:28:43.261722] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:02.785 13:28:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:29:03.044 [2024-07-26 13:28:43.489947] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:29:03.044 [2024-07-26 13:28:43.490085] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:03.044 13:28:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:29:03.044 13:28:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 852987 0 00:29:03.044 13:28:43 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 852987 0 busy 00:29:03.044 13:28:43 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=852987 00:29:03.044 13:28:43 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:03.044 13:28:43 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:29:03.044 13:28:43 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:29:03.044 13:28:43 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:03.044 13:28:43 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:03.044 13:28:43 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:03.044 13:28:43 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 852987 -w 256 00:29:03.044 13:28:43 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:29:03.304 13:28:43 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 852987 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.77 reactor_0' 00:29:03.304 13:28:43 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # echo 852987 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.77 reactor_0 00:29:03.304 13:28:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:03.304 13:28:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:03.304 13:28:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:29:03.304 13:28:43 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:29:03.304 13:28:43 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:29:03.304 13:28:43 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:29:03.304 13:28:43 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:29:03.304 13:28:43 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:03.304 13:28:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:29:03.304 13:28:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 852987 2 00:29:03.304 13:28:43 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 852987 2 busy 00:29:03.304 13:28:43 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=852987 00:29:03.304 13:28:43 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:29:03.304 13:28:43 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:29:03.304 13:28:43 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:29:03.304 13:28:43 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:03.304 13:28:43 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:03.304 13:28:43 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:03.304 13:28:43 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 852987 -w 256 00:29:03.304 13:28:43 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:29:03.563 
13:28:43 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 852991 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.37 reactor_2' 00:29:03.563 13:28:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 852991 root 20 0 128.2g 35840 23296 R 99.9 0.1 0:00.37 reactor_2 00:29:03.563 13:28:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:03.563 13:28:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:03.563 13:28:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:29:03.563 13:28:43 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:29:03.563 13:28:43 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:29:03.563 13:28:43 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:29:03.563 13:28:43 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:29:03.563 13:28:43 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:03.563 13:28:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:29:03.823 [2024-07-26 13:28:44.095664] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:29:03.823 [2024-07-26 13:28:44.095762] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:03.823 13:28:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:29:03.823 13:28:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 852987 2 00:29:03.823 13:28:44 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 852987 2 idle 00:29:03.823 13:28:44 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=852987 00:29:03.823 13:28:44 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:29:03.823 13:28:44 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:03.823 13:28:44 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:29:03.823 13:28:44 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:29:03.823 13:28:44 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:03.823 13:28:44 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:03.823 13:28:44 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:03.823 13:28:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:29:03.823 13:28:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 852987 -w 256 00:29:03.823 13:28:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 852991 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.60 reactor_2' 00:29:03.823 13:28:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 852991 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:00.60 reactor_2 00:29:03.823 13:28:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:03.823 13:28:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:03.823 13:28:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:29:03.823 13:28:44 
reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:29:03.823 13:28:44 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:29:03.823 13:28:44 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:29:03.823 13:28:44 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:29:03.823 13:28:44 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:03.823 13:28:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:29:04.083 [2024-07-26 13:28:44.508712] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:29:04.083 [2024-07-26 13:28:44.508842] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:29:04.083 [2024-07-26 13:28:44.508865] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:29:04.083 13:28:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:29:04.083 13:28:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 852987 0 00:29:04.083 13:28:44 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 852987 0 idle 00:29:04.083 13:28:44 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=852987 00:29:04.083 13:28:44 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:04.083 13:28:44 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:04.083 13:28:44 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:29:04.083 13:28:44 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:29:04.083 13:28:44 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:04.083 13:28:44 
reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:04.083 13:28:44 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:04.083 13:28:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 852987 -w 256 00:29:04.083 13:28:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:29:04.343 13:28:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 852987 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:01.59 reactor_0' 00:29:04.343 13:28:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 852987 root 20 0 128.2g 35840 23296 S 0.0 0.1 0:01.59 reactor_0 00:29:04.343 13:28:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:04.343 13:28:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:04.343 13:28:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:29:04.343 13:28:44 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:29:04.343 13:28:44 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:29:04.343 13:28:44 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:29:04.343 13:28:44 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:29:04.343 13:28:44 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:04.343 13:28:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:29:04.343 13:28:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:29:04.343 13:28:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:29:04.343 13:28:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 852987 00:29:04.343 13:28:44 reactor_set_interrupt -- common/autotest_common.sh@950 -- # '[' -z 852987 ']' 00:29:04.343 13:28:44 reactor_set_interrupt -- common/autotest_common.sh@954 -- # kill -0 852987 
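The `reactor_is_busy_or_idle` checks traced above sample one batch iteration of `top`, pick the reactor thread's row, read the %CPU column, and compare it against a 30% busy threshold. A minimal re-creation of that parsing, split into testable helpers (function names here are illustrative, not SPDK's):

```shell
# cpu_from_top_line extracts the %CPU column (field 9) of a `top -bH` row,
# strips leading whitespace, and truncates the fraction ("0.0" -> "0"),
# mirroring the sed/awk pipeline at common.sh@25-26 in the trace.
cpu_from_top_line() {
    echo "$1" | sed -e 's/^\s*//g' | awk '{print $9}' | cut -d. -f1
}

# reactor_idle_line succeeds when the sampled CPU rate is at or below 30%,
# the same threshold the harness applies before declaring a reactor idle.
reactor_idle_line() {
    local rate
    rate=$(cpu_from_top_line "$1")
    [ "${rate:-0}" -le 30 ]
}

# In the real check, the line would come from a live sample such as:
#   top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx"
```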
00:29:04.343 13:28:44 reactor_set_interrupt -- common/autotest_common.sh@955 -- # uname 00:29:04.343 13:28:44 reactor_set_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:04.343 13:28:44 reactor_set_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 852987 00:29:04.343 13:28:44 reactor_set_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:04.343 13:28:44 reactor_set_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:04.343 13:28:44 reactor_set_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 852987' 00:29:04.343 killing process with pid 852987 00:29:04.343 13:28:44 reactor_set_interrupt -- common/autotest_common.sh@969 -- # kill 852987 00:29:04.343 13:28:44 reactor_set_interrupt -- common/autotest_common.sh@974 -- # wait 852987 00:29:04.603 13:28:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:29:04.603 13:28:45 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:29:04.603 00:29:04.603 real 0m9.524s 00:29:04.603 user 0m8.857s 00:29:04.603 sys 0m1.983s 00:29:04.603 13:28:45 reactor_set_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:04.603 13:28:45 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:04.603 ************************************ 00:29:04.603 END TEST reactor_set_interrupt 00:29:04.603 ************************************ 00:29:04.603 13:28:45 -- spdk/autotest.sh@198 -- # run_test reap_unregistered_poller /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:29:04.603 13:28:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:04.603 13:28:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:04.603 13:28:45 -- common/autotest_common.sh@10 -- # set +x 00:29:04.603 ************************************ 
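The `killprocess` sequence traced above checks that the pid is still alive with `kill -0`, reads its command name via `ps`, refuses to signal a `sudo` process, then kills it and waits. A hedged sketch of the same shape (a simplification of the real helper in autotest_common.sh):

```shell
# killprocess_sketch: terminate a test process safely.
killprocess_sketch() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 0           # already gone
    name=$(ps --no-headers -o comm= "$pid")
    if [ "$name" = sudo ]; then
        echo "refusing to kill sudo ($pid)"          # never signal sudo itself
        return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                  # reap if it is our child
}
```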
00:29:04.603 START TEST reap_unregistered_poller 00:29:04.603 ************************************ 00:29:04.603 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:29:04.865 * Looking for test storage... 00:29:04.865 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:04.865 13:28:45 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:29:04.865 13:28:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:29:04.865 13:28:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:04.865 13:28:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:04.865 13:28:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 
00:29:04.865 13:28:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:29:04.865 13:28:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:29:04.865 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:29:04.865 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:29:04.865 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:29:04.865 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:29:04.865 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:29:04.865 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:29:04.865 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:29:04.865 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:29:04.865 13:28:45 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:29:04.865 13:28:45 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:29:04.865 13:28:45 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:29:04.865 13:28:45 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:29:04.865 13:28:45 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:29:04.865 13:28:45 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:29:04.865 13:28:45 reap_unregistered_poller -- common/build_config.sh@7 -- # 
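The `testdir`/`rootdir` resolution at the top of interrupt_common.sh follows a common pattern: canonicalize the script's own directory with `dirname` + `readlink -f`, then walk up a fixed number of levels to the repository root. A parameterized sketch (the helper name is made up for illustration):

```shell
# resolve_root: canonical directory of a script, then a relative walk up.
# Equivalent to:  testdir=$(readlink -f "$(dirname "$0")")
#                 rootdir=$(readlink -f "$testdir/../..")
resolve_root() {
    local script=$1 up=$2 dir
    dir=$(readlink -f "$(dirname "$script")")
    readlink -f "$dir/$up"
}
```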
CONFIG_PREFIX=/usr/local 00:29:04.865 13:28:45 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:29:04.865 13:28:45 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:29:04.865 13:28:45 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:29:04.865 13:28:45 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:29:04.865 13:28:45 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:29:04.865 13:28:45 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:29:04.865 13:28:45 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:29:04.865 13:28:45 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:29:04.865 13:28:45 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_CET=n 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:29:04.866 
13:28:45 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:29:04.866 13:28:45 reap_unregistered_poller -- 
common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@65 
-- # CONFIG_APPS=y 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_FC=n 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:29:04.866 13:28:45 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:29:04.866 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:29:04.866 13:28:45 
reap_unregistered_poller -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:29:04.866 13:28:45 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:29:04.866 13:28:45 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:29:04.866 13:28:45 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:29:04.866 13:28:45 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:29:04.866 13:28:45 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:29:04.866 13:28:45 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:29:04.866 13:28:45 reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:29:04.866 13:28:45 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:29:04.866 13:28:45 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:29:04.866 13:28:45 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:29:04.866 13:28:45 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:29:04.866 13:28:45 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:29:04.866 13:28:45 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:29:04.866 13:28:45 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef 
SPDK_CONFIG_H 00:29:04.866 #define SPDK_CONFIG_H 00:29:04.866 #define SPDK_CONFIG_APPS 1 00:29:04.866 #define SPDK_CONFIG_ARCH native 00:29:04.866 #undef SPDK_CONFIG_ASAN 00:29:04.866 #undef SPDK_CONFIG_AVAHI 00:29:04.866 #undef SPDK_CONFIG_CET 00:29:04.866 #define SPDK_CONFIG_COVERAGE 1 00:29:04.866 #define SPDK_CONFIG_CROSS_PREFIX 00:29:04.866 #define SPDK_CONFIG_CRYPTO 1 00:29:04.866 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:29:04.866 #undef SPDK_CONFIG_CUSTOMOCF 00:29:04.866 #undef SPDK_CONFIG_DAOS 00:29:04.866 #define SPDK_CONFIG_DAOS_DIR 00:29:04.866 #define SPDK_CONFIG_DEBUG 1 00:29:04.866 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:29:04.866 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:29:04.866 #define SPDK_CONFIG_DPDK_INC_DIR 00:29:04.866 #define SPDK_CONFIG_DPDK_LIB_DIR 00:29:04.866 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:29:04.866 #undef SPDK_CONFIG_DPDK_UADK 00:29:04.866 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:29:04.866 #define SPDK_CONFIG_EXAMPLES 1 00:29:04.866 #undef SPDK_CONFIG_FC 00:29:04.866 #define SPDK_CONFIG_FC_PATH 00:29:04.866 #define SPDK_CONFIG_FIO_PLUGIN 1 00:29:04.866 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:29:04.866 #undef SPDK_CONFIG_FUSE 00:29:04.866 #undef SPDK_CONFIG_FUZZER 00:29:04.866 #define SPDK_CONFIG_FUZZER_LIB 00:29:04.866 #undef SPDK_CONFIG_GOLANG 00:29:04.866 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:29:04.866 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:29:04.866 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:29:04.866 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:29:04.866 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:29:04.866 #undef SPDK_CONFIG_HAVE_LIBBSD 00:29:04.866 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:29:04.866 #define SPDK_CONFIG_IDXD 1 00:29:04.866 #define SPDK_CONFIG_IDXD_KERNEL 1 00:29:04.866 #define SPDK_CONFIG_IPSEC_MB 1 00:29:04.866 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 
00:29:04.867 #define SPDK_CONFIG_ISAL 1 00:29:04.867 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:29:04.867 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:29:04.867 #define SPDK_CONFIG_LIBDIR 00:29:04.867 #undef SPDK_CONFIG_LTO 00:29:04.867 #define SPDK_CONFIG_MAX_LCORES 128 00:29:04.867 #define SPDK_CONFIG_NVME_CUSE 1 00:29:04.867 #undef SPDK_CONFIG_OCF 00:29:04.867 #define SPDK_CONFIG_OCF_PATH 00:29:04.867 #define SPDK_CONFIG_OPENSSL_PATH 00:29:04.867 #undef SPDK_CONFIG_PGO_CAPTURE 00:29:04.867 #define SPDK_CONFIG_PGO_DIR 00:29:04.867 #undef SPDK_CONFIG_PGO_USE 00:29:04.867 #define SPDK_CONFIG_PREFIX /usr/local 00:29:04.867 #undef SPDK_CONFIG_RAID5F 00:29:04.867 #undef SPDK_CONFIG_RBD 00:29:04.867 #define SPDK_CONFIG_RDMA 1 00:29:04.867 #define SPDK_CONFIG_RDMA_PROV verbs 00:29:04.867 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:29:04.867 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:29:04.867 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:29:04.867 #define SPDK_CONFIG_SHARED 1 00:29:04.867 #undef SPDK_CONFIG_SMA 00:29:04.867 #define SPDK_CONFIG_TESTS 1 00:29:04.867 #undef SPDK_CONFIG_TSAN 00:29:04.867 #define SPDK_CONFIG_UBLK 1 00:29:04.867 #define SPDK_CONFIG_UBSAN 1 00:29:04.867 #undef SPDK_CONFIG_UNIT_TESTS 00:29:04.867 #undef SPDK_CONFIG_URING 00:29:04.867 #define SPDK_CONFIG_URING_PATH 00:29:04.867 #undef SPDK_CONFIG_URING_ZNS 00:29:04.867 #undef SPDK_CONFIG_USDT 00:29:04.867 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:29:04.867 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:29:04.867 #undef SPDK_CONFIG_VFIO_USER 00:29:04.867 #define SPDK_CONFIG_VFIO_USER_DIR 00:29:04.867 #define SPDK_CONFIG_VHOST 1 00:29:04.867 #define SPDK_CONFIG_VIRTIO 1 00:29:04.867 #undef SPDK_CONFIG_VTUNE 00:29:04.867 #define SPDK_CONFIG_VTUNE_DIR 00:29:04.867 #define SPDK_CONFIG_WERROR 1 00:29:04.867 #define SPDK_CONFIG_WPDK_DIR 00:29:04.867 #undef SPDK_CONFIG_XNVME 00:29:04.867 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:29:04.867 13:28:45 reap_unregistered_poller 
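The long `[[ ... == *\#\d\e\f\i\n\e\ ... ]]` test above slurps the generated config header with `$(<file)` and glob-matches it for `#define SPDK_CONFIG_DEBUG`, which is how applications.sh decides whether this is a debug build. A minimal bash re-creation against a throwaway header (paths and the helper name are illustrative):

```shell
# config_has: succeed when the header defines the given symbol.
# $(<"$header") reads the file without a subshell `cat`; the unquoted-glob
# RHS of [[ == ]] makes this a substring match, as in applications.sh@23.
config_has() {
    local header=$1 define=$2
    [[ $(<"$header") == *"#define $define"* ]]
}
```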
-- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:29:04.867 13:28:45 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:04.867 13:28:45 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:04.867 13:28:45 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:04.867 13:28:45 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.867 13:28:45 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.867 13:28:45 reap_unregistered_poller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.867 13:28:45 
reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:29:04.867 13:28:45 reap_unregistered_poller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:29:04.867 13:28:45 reap_unregistered_poller -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:29:04.867 13:28:45 reap_unregistered_poller -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:29:04.867 13:28:45 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:29:04.867 13:28:45 reap_unregistered_poller -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:29:04.867 13:28:45 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:29:04.867 13:28:45 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:29:04.867 13:28:45 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:29:04.867 13:28:45 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:29:04.867 13:28:45 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:29:04.867 13:28:45 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:29:04.867 13:28:45 reap_unregistered_poller -- 
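The PATH traced above repeats `/opt/go`, `/opt/golangci`, and `/opt/protoc` several times because each nested `source` of paths/export.sh prepends the same directories again. An idempotent alternative would only prepend a directory when it is absent; a sketch:

```shell
# path_prepend: add a directory to the front of PATH unless already present.
# Wrapping PATH in colons makes the membership test exact (no substring hits).
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;              # already present, leave PATH unchanged
        *) PATH="$1:$PATH" ;;
    esac
}
```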
pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:29:04.867 13:28:45 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:29:04.867 13:28:45 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:29:04.867 13:28:45 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:29:04.867 13:28:45 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:29:04.867 13:28:45 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:29:04.867 13:28:45 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:29:04.867 13:28:45 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:29:04.867 13:28:45 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:29:04.867 13:28:45 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:29:04.867 13:28:45 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:29:04.867 13:28:45 reap_unregistered_poller -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:29:04.867 13:28:45 reap_unregistered_poller -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:29:04.867 13:28:45 reap_unregistered_poller -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:29:04.867 13:28:45 reap_unregistered_poller -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:29:04.867 13:28:45 reap_unregistered_poller -- pm/common@88 -- # [[ ! 
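The pm/common setup above keeps an associative array mapping each power/resource monitor to whether it needs sudo, starts from a baseline monitor list, and appends the privileged monitors only on bare-metal Linux (not QEMU, not a container). A hedged bash sketch of the same shape (array contents copied from the trace; the `needs_sudo` helper is illustrative):

```shell
# Monitor -> needs-sudo map, as populated at pm/common@71-74.
declare -A MONITOR_RESOURCES_SUDO=(
    [collect-bmc-pm]=1
    [collect-cpu-load]=0
    [collect-cpu-temp]=0
    [collect-vmstat]=0
)
# Baseline monitors, then the bare-metal extras (pm/common@78, @84-85).
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)

# needs_sudo: query the map, defaulting unknown monitors to "no sudo".
needs_sudo() { [ "${MONITOR_RESOURCES_SUDO[$1]:-0}" -eq 1 ]; }
```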
-d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 0 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 0 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@70 -- # : 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 1 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0 00:29:04.867 13:28:45 reap_unregistered_poller -- 
common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 0 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0 00:29:04.867 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- 
common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 1 00:29:04.868 13:28:45 reap_unregistered_poller -- 
common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@126 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 1 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : 00:29:04.868 13:28:45 reap_unregistered_poller -- 
common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : true 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@150 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@154 -- # : 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- 
common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 1 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@166 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@169 -- # : 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@171 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@173 -- # : 0 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:29:04.868 13:28:45 reap_unregistered_poller -- 
common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@183 -- # export 
PCI_BLOCK_SYNC_ON_RESET=yes 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:29:04.868 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@200 -- # 
asan_suppression_file=/var/tmp/asan_suppression_file 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@202 -- # cat 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:29:04.869 13:28:45 reap_unregistered_poller -- 
common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@265 -- # export valgrind= 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@265 -- # valgrind= 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@271 -- # uname -s 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@274 -- # [[ 1 -eq 1 ]] 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@278 -- # export 
HUGE_EVEN_ALLOC=yes 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@278 -- # HUGE_EVEN_ALLOC=yes 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@281 -- # MAKE=make 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j112 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@301 -- # TEST_MODE= 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@320 -- # [[ -z 853815 ]] 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@320 -- # kill -0 853815 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local mount target_dir 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.aXS3OT 00:29:04.869 13:28:45 
reap_unregistered_poller -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.aXS3OT/tests/interrupt /tmp/spdk.aXS3OT 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@329 -- # df -T 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=954302464 
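The trace at autotest_common.sh@340-357 above shows the test-storage fallback pattern: `mktemp -u` generates an unused name without creating anything, candidate directories are listed in preference order, and the fallback tree is created up front. A minimal standalone sketch of that pattern (`demo_testdir` is a hypothetical stand-in for the script's `$testdir`):

```shell
#!/usr/bin/env bash
# Sketch of the storage-fallback pattern traced above (assumption: simplified
# from set_test_storage; "demo_testdir" is a hypothetical test directory).
testdir=/tmp/demo_testdir

# -u: dry run (print a name, create nothing); -d: directory template;
# -t: place it under $TMPDIR (default /tmp).
storage_fallback=$(mktemp -udt spdk.XXXXXX)

# Candidates in preference order: the real testdir, a per-test subdir of the
# fallback, then the fallback root itself.
storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")

# Create the fallback tree so the later df-based scan has a real target.
mkdir -p "$storage_fallback/tests/${testdir##*/}"

printf '%s\n' "${storage_candidates[@]}"
```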
00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=4330127360 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=55088480256 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=61742305280 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=6653825024 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=30866341888 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871150592 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=4808704 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@364 
-- # avails["$mount"]=12338663424 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=12348461056 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=9797632 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=30869889024 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871154688 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=1265664 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:29:04.869 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:29:04.870 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=6174224384 00:29:04.870 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=6174228480 00:29:04.870 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:29:04.870 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:29:04.870 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:29:04.870 * Looking for test storage... 
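The mount scan traced above reads `df -T` output (header stripped) field by field into bash associative arrays keyed by mount point. A minimal sketch of that read loop; the `* 1024` scaling is an assumption, since `df` reports 1K blocks while the traced `sizes`/`avails`/`uses` values are in bytes:

```shell
#!/usr/bin/env bash
# Sketch of the df -T scan traced above (assumption: mirrors the
# read/array pattern visible in the set_test_storage trace).
declare -A mounts fss sizes avails uses

# df -T columns: Filesystem Type 1K-blocks Used Available Use% Mounted-on.
# The throwaway "_" swallows the Use% column, matching the traced read.
while read -r source fs size use avail _ mount; do
  mounts["$mount"]=$source
  fss["$mount"]=$fs
  sizes["$mount"]=$((size * 1024))    # 1K blocks -> bytes (assumed scaling)
  uses["$mount"]=$((use * 1024))
  avails["$mount"]=$((avail * 1024))
done < <(df -T | grep -v Filesystem)

printf '/ -> %s\n' "${fss[/]}"
```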
00:29:04.870 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@370 -- # local target_space new_size 00:29:04.870 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:29:04.870 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:04.870 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mount=/ 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@376 -- # target_space=55088480256 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@383 -- # new_size=8868417536 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:05.130 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@391 -- # return 0 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # set -o errtrace 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@1687 -- # true 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@1689 -- # xtrace_fd 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:29:05.130 13:28:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:29:05.130 13:28:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:05.130 13:28:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:29:05.130 13:28:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:29:05.130 13:28:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:29:05.130 13:28:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:29:05.130 13:28:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:29:05.130 13:28:45 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:29:05.130 13:28:45 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # 
PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:29:05.130 13:28:45 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:29:05.130 13:28:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:05.130 13:28:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:29:05.130 13:28:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=853925 00:29:05.130 13:28:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:05.130 13:28:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:29:05.130 13:28:45 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 853925 /var/tmp/spdk.sock 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@831 -- # '[' -z 853925 ']' 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:05.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
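The `HH:MM:SS domain -- file@line -- $` prefix on every trace line in this log comes from the PS4 assignment at autotest_common.sh@1686 combined with `set -x`: bash expands PS4 like a prompt string, so `\t` becomes a timestamp and `${LINENO}` the current line. A simplified sketch (a literal `demo.sh` stands in for the script's `${BASH_SOURCE...}` expansion):

```shell
#!/usr/bin/env bash
# Sketch reproducing this log's trace-line format (simplified: a fixed
# "demo.sh" label replaces the original BASH_SOURCE trimming).
test_domain=reap_unregistered_poller
PS4=' \t ${test_domain:-} -- demo.sh@${LINENO} -- \$ '

# Capture only the xtrace output: stderr -> captured stdout, real stdout
# (the "hello") discarded.
trace=$( { set -x; echo hello; } 2>&1 >/dev/null )
printf '%s\n' "$trace"
```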
00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:05.130 13:28:45 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:29:05.130 [2024-07-26 13:28:45.442785] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:29:05.130 [2024-07-26 13:28:45.442847] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid853925 ] 00:29:05.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.130 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:05.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.130 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:05.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.130 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:05.130 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.130 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:05.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.131 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:05.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.131 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:05.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.131 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:05.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.131 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:05.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.131 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:05.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.131 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:29:05.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.131 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:05.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.131 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:05.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.131 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:05.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.131 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:05.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.131 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:05.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.131 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:05.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.131 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:05.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.131 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:05.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.131 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:05.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.131 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:05.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.131 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:05.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.131 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:05.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.131 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:05.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.131 EAL: Requested device 0000:3f:01.7 
cannot be used 00:29:05.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.131 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:05.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.131 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:05.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.131 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:05.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.131 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:05.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.131 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:05.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.131 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:05.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.131 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:05.131 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:05.131 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:05.131 [2024-07-26 13:28:45.574463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:05.390 [2024-07-26 13:28:45.662209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:05.390 [2024-07-26 13:28:45.662305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:05.390 [2024-07-26 13:28:45.662309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.390 [2024-07-26 13:28:45.731470] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
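The EAL warnings above repeat once for every QAT virtual function the driver tried to claim. As a purely illustrative aside (not part of the test scripts), the same 32 device addresses listed in the trace can be enumerated with bash brace expansion; the bus/device/function ranges are copied from the log:

```shell
# Enumerate the QAT PCI functions the trace rejects:
# 0000:3d:01.0-7, 0000:3d:02.0-7, 0000:3f:01.0-7, 0000:3f:02.0-7.
printf '%s\n' 0000:3d:01.{0..7} 0000:3d:02.{0..7} 0000:3f:01.{0..7} 0000:3f:02.{0..7} | wc -l
# 32 functions in total, one per "cannot be used" message above.
```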
00:29:06.022 13:28:46 reap_unregistered_poller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:06.022 13:28:46 reap_unregistered_poller -- common/autotest_common.sh@864 -- # return 0 00:29:06.022 13:28:46 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:29:06.022 13:28:46 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:29:06.022 13:28:46 reap_unregistered_poller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.022 13:28:46 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:29:06.022 13:28:46 reap_unregistered_poller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.022 13:28:46 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:29:06.022 "name": "app_thread", 00:29:06.022 "id": 1, 00:29:06.022 "active_pollers": [], 00:29:06.022 "timed_pollers": [ 00:29:06.022 { 00:29:06.022 "name": "rpc_subsystem_poll_servers", 00:29:06.022 "id": 1, 00:29:06.022 "state": "waiting", 00:29:06.022 "run_count": 0, 00:29:06.022 "busy_count": 0, 00:29:06.022 "period_ticks": 10000000 00:29:06.022 } 00:29:06.022 ], 00:29:06.022 "paused_pollers": [] 00:29:06.022 }' 00:29:06.022 13:28:46 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:29:06.022 13:28:46 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:29:06.022 13:28:46 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:29:06.022 13:28:46 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:29:06.022 13:28:46 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:29:06.022 13:28:46 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:29:06.022 
13:28:46 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:29:06.022 13:28:46 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:29:06.022 13:28:46 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:29:06.022 5000+0 records in 00:29:06.022 5000+0 records out 00:29:06.022 10240000 bytes (10 MB, 9.8 MiB) copied, 0.026821 s, 382 MB/s 00:29:06.022 13:28:46 reap_unregistered_poller -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:29:06.281 AIO0 00:29:06.281 13:28:46 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:06.541 13:28:46 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:29:06.541 13:28:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:29:06.541 13:28:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:29:06.541 13:28:47 reap_unregistered_poller -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.541 13:28:47 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:29:06.541 13:28:47 reap_unregistered_poller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.800 13:28:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:29:06.800 "name": "app_thread", 00:29:06.800 "id": 1, 00:29:06.800 "active_pollers": [], 00:29:06.800 "timed_pollers": [ 00:29:06.800 { 00:29:06.800 "name": "rpc_subsystem_poll_servers", 00:29:06.800 "id": 1, 00:29:06.800 "state": "waiting", 00:29:06.800 "run_count": 0, 00:29:06.800 "busy_count": 0, 
00:29:06.800 "period_ticks": 10000000 00:29:06.800 } 00:29:06.800 ], 00:29:06.800 "paused_pollers": [] 00:29:06.800 }' 00:29:06.800 13:28:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:29:06.800 13:28:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:29:06.800 13:28:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:29:06.800 13:28:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:29:06.800 13:28:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:29:06.800 13:28:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:29:06.800 13:28:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:29:06.800 13:28:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 853925 00:29:06.800 13:28:47 reap_unregistered_poller -- common/autotest_common.sh@950 -- # '[' -z 853925 ']' 00:29:06.800 13:28:47 reap_unregistered_poller -- common/autotest_common.sh@954 -- # kill -0 853925 00:29:06.800 13:28:47 reap_unregistered_poller -- common/autotest_common.sh@955 -- # uname 00:29:06.800 13:28:47 reap_unregistered_poller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:06.800 13:28:47 reap_unregistered_poller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 853925 00:29:06.801 13:28:47 reap_unregistered_poller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:06.801 13:28:47 reap_unregistered_poller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:06.801 13:28:47 reap_unregistered_poller -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 853925' 00:29:06.801 killing process with pid 853925 00:29:06.801 13:28:47 reap_unregistered_poller -- common/autotest_common.sh@969 -- # kill 853925 00:29:06.801 13:28:47 reap_unregistered_poller -- common/autotest_common.sh@974 -- # wait 853925 00:29:07.060 13:28:47 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:29:07.061 13:28:47 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:29:07.061 00:29:07.061 real 0m2.335s 00:29:07.061 user 0m1.418s 00:29:07.061 sys 0m0.642s 00:29:07.061 13:28:47 reap_unregistered_poller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:07.061 13:28:47 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:29:07.061 ************************************ 00:29:07.061 END TEST reap_unregistered_poller 00:29:07.061 ************************************ 00:29:07.061 13:28:47 -- spdk/autotest.sh@202 -- # uname -s 00:29:07.061 13:28:47 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:29:07.061 13:28:47 -- spdk/autotest.sh@203 -- # [[ 1 -eq 1 ]] 00:29:07.061 13:28:47 -- spdk/autotest.sh@209 -- # [[ 1 -eq 0 ]] 00:29:07.061 13:28:47 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:29:07.061 13:28:47 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:29:07.061 13:28:47 -- spdk/autotest.sh@264 -- # timing_exit lib 00:29:07.061 13:28:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:07.061 13:28:47 -- common/autotest_common.sh@10 -- # set +x 00:29:07.061 13:28:47 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:29:07.061 13:28:47 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:29:07.061 13:28:47 -- spdk/autotest.sh@283 -- # '[' 0 -eq 1 ']' 00:29:07.061 13:28:47 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:29:07.061 13:28:47 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:29:07.061 13:28:47 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:29:07.061 13:28:47 -- 
spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:29:07.061 13:28:47 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:29:07.061 13:28:47 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:29:07.061 13:28:47 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:29:07.061 13:28:47 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:29:07.061 13:28:47 -- spdk/autotest.sh@351 -- # '[' 1 -eq 1 ']' 00:29:07.061 13:28:47 -- spdk/autotest.sh@352 -- # run_test compress_compdev /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:29:07.061 13:28:47 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:07.061 13:28:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:07.061 13:28:47 -- common/autotest_common.sh@10 -- # set +x 00:29:07.061 ************************************ 00:29:07.061 START TEST compress_compdev 00:29:07.061 ************************************ 00:29:07.061 13:28:47 compress_compdev -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:29:07.321 * Looking for test storage... 
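The reap_unregistered_poller test that just finished extracts poller names from the `thread_get_pollers` RPC reply with jq filters such as `.timed_pollers[].name`. A rough, self-contained sketch of that extraction follows; python3 stands in for jq so it runs without extra tools, and the JSON payload is copied (condensed to one line) from the trace:

```shell
# thread_get_pollers reply as it appears in the trace above.
reply='{"name":"app_thread","id":1,"active_pollers":[],"timed_pollers":[{"name":"rpc_subsystem_poll_servers","id":1,"state":"waiting","run_count":0,"busy_count":0,"period_ticks":10000000}],"paused_pollers":[]}'
# Equivalent of: jq -r '.timed_pollers[].name'
echo "$reply" | python3 -c 'import json,sys; [print(p["name"]) for p in json.load(sys.stdin)["timed_pollers"]]'
# prints rpc_subsystem_poll_servers
```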
00:29:07.321 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:29:07.321 13:28:47 compress_compdev -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:29:07.321 13:28:47 compress_compdev -- nvmf/common.sh@7 -- # uname -s 00:29:07.321 13:28:47 compress_compdev -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:07.321 13:28:47 compress_compdev -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:07.321 13:28:47 compress_compdev -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:07.321 13:28:47 compress_compdev -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:07.321 13:28:47 compress_compdev -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:07.321 13:28:47 compress_compdev -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:07.321 13:28:47 compress_compdev -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:07.321 13:28:47 compress_compdev -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:07.321 13:28:47 compress_compdev -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:07.321 13:28:47 compress_compdev -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:07.321 13:28:47 compress_compdev -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:29:07.321 13:28:47 compress_compdev -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:29:07.321 13:28:47 compress_compdev -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:07.321 13:28:47 compress_compdev -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:07.321 13:28:47 compress_compdev -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:07.321 13:28:47 compress_compdev -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:07.321 13:28:47 compress_compdev -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:29:07.321 13:28:47 compress_compdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:07.321 13:28:47 compress_compdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:07.321 13:28:47 compress_compdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:07.321 13:28:47 compress_compdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.321 13:28:47 compress_compdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.321 13:28:47 compress_compdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.321 13:28:47 compress_compdev -- 
paths/export.sh@5 -- # export PATH 00:29:07.321 13:28:47 compress_compdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.321 13:28:47 compress_compdev -- nvmf/common.sh@47 -- # : 0 00:29:07.321 13:28:47 compress_compdev -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:07.321 13:28:47 compress_compdev -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:07.321 13:28:47 compress_compdev -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:07.321 13:28:47 compress_compdev -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:07.321 13:28:47 compress_compdev -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:07.321 13:28:47 compress_compdev -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:07.321 13:28:47 compress_compdev -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:07.321 13:28:47 compress_compdev -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:07.321 13:28:47 compress_compdev -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:07.321 13:28:47 compress_compdev -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:29:07.321 13:28:47 compress_compdev -- compress/compress.sh@82 -- # test_type=compdev 00:29:07.321 13:28:47 compress_compdev -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:29:07.321 13:28:47 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:29:07.321 13:28:47 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=854289 00:29:07.321 13:28:47 compress_compdev -- 
compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:07.321 13:28:47 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 854289 00:29:07.321 13:28:47 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 854289 ']' 00:29:07.321 13:28:47 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:29:07.321 13:28:47 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.321 13:28:47 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:07.321 13:28:47 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.321 13:28:47 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:07.321 13:28:47 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:29:07.321 [2024-07-26 13:28:47.772553] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
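bdevperf is launched above with `-m 0x6`, and the EAL later reports two cores available, with reactors started on cores 1 and 2. Decoding that coremask is simple bit arithmetic; the following is a hypothetical bash sketch, not part of the SPDK scripts:

```shell
# Decode a DPDK-style coremask into the list of enabled core IDs.
mask=0x6   # coremask passed via -m in the bdevperf invocation above
cores=""
for bit in $(seq 0 31); do
  # Test whether bit <bit> of the mask is set.
  if (( (mask >> bit) & 1 )); then
    cores="$cores $bit"
  fi
done
echo "cores:$cores"   # 0x6 -> bits 1 and 2 set
```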
00:29:07.321 [2024-07-26 13:28:47.772596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid854289 ] 00:29:07.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.321 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:07.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.321 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:07.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.321 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:07.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.321 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:07.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.321 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:07.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.321 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:07.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.321 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:07.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.321 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:07.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.321 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:07.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.321 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:07.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.321 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:07.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.321 EAL: Requested device 0000:3d:02.3 cannot be used 
00:29:07.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.321 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:07.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.321 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:07.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.321 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:07.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.321 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:07.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.321 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:07.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.321 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:07.321 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.321 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:07.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.322 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:07.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.322 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:07.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.322 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:07.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.322 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:07.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.322 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:07.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.322 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:07.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.322 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:07.322 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.322 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:07.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.322 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:07.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.322 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:07.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.322 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:07.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.322 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:07.322 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:07.322 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:07.581 [2024-07-26 13:28:47.877532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:07.581 [2024-07-26 13:28:47.961243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:07.581 [2024-07-26 13:28:47.961248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.150 [2024-07-26 13:28:48.637112] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:29:08.468 13:28:48 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:08.469 13:28:48 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:29:08.469 13:28:48 compress_compdev -- compress/compress.sh@74 -- # create_vols 00:29:08.469 13:28:48 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:08.469 13:28:48 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:11.762 [2024-07-26 13:28:51.783888] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1f72f00 PMD being used: compress_qat 00:29:11.763 13:28:51 
compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:29:11.763 13:28:51 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:29:11.763 13:28:51 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:11.763 13:28:51 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:29:11.763 13:28:51 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:11.763 13:28:51 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:11.763 13:28:51 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:11.763 13:28:52 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:29:11.763 [ 00:29:11.763 { 00:29:11.763 "name": "Nvme0n1", 00:29:11.763 "aliases": [ 00:29:11.763 "2d29a547-c8d8-4c87-9d3a-30fe442bcc30" 00:29:11.763 ], 00:29:11.763 "product_name": "NVMe disk", 00:29:11.763 "block_size": 512, 00:29:11.763 "num_blocks": 3907029168, 00:29:11.763 "uuid": "2d29a547-c8d8-4c87-9d3a-30fe442bcc30", 00:29:11.763 "assigned_rate_limits": { 00:29:11.763 "rw_ios_per_sec": 0, 00:29:11.763 "rw_mbytes_per_sec": 0, 00:29:11.763 "r_mbytes_per_sec": 0, 00:29:11.763 "w_mbytes_per_sec": 0 00:29:11.763 }, 00:29:11.763 "claimed": false, 00:29:11.763 "zoned": false, 00:29:11.763 "supported_io_types": { 00:29:11.763 "read": true, 00:29:11.763 "write": true, 00:29:11.763 "unmap": true, 00:29:11.763 "flush": true, 00:29:11.763 "reset": true, 00:29:11.763 "nvme_admin": true, 00:29:11.763 "nvme_io": true, 00:29:11.763 "nvme_io_md": false, 00:29:11.763 "write_zeroes": true, 00:29:11.763 "zcopy": false, 00:29:11.763 "get_zone_info": false, 00:29:11.763 "zone_management": false, 00:29:11.763 "zone_append": false, 00:29:11.763 "compare": false, 00:29:11.763 "compare_and_write": false, 00:29:11.763 
"abort": true, 00:29:11.763 "seek_hole": false, 00:29:11.763 "seek_data": false, 00:29:11.763 "copy": false, 00:29:11.763 "nvme_iov_md": false 00:29:11.763 }, 00:29:11.763 "driver_specific": { 00:29:11.763 "nvme": [ 00:29:11.763 { 00:29:11.763 "pci_address": "0000:d8:00.0", 00:29:11.763 "trid": { 00:29:11.763 "trtype": "PCIe", 00:29:11.763 "traddr": "0000:d8:00.0" 00:29:11.763 }, 00:29:11.763 "ctrlr_data": { 00:29:11.763 "cntlid": 0, 00:29:11.763 "vendor_id": "0x8086", 00:29:11.763 "model_number": "INTEL SSDPE2KX020T8", 00:29:11.763 "serial_number": "BTLJ125505KA2P0BGN", 00:29:11.763 "firmware_revision": "VDV10170", 00:29:11.763 "oacs": { 00:29:11.763 "security": 0, 00:29:11.763 "format": 1, 00:29:11.763 "firmware": 1, 00:29:11.763 "ns_manage": 1 00:29:11.763 }, 00:29:11.763 "multi_ctrlr": false, 00:29:11.763 "ana_reporting": false 00:29:11.763 }, 00:29:11.763 "vs": { 00:29:11.763 "nvme_version": "1.2" 00:29:11.763 }, 00:29:11.763 "ns_data": { 00:29:11.763 "id": 1, 00:29:11.763 "can_share": false 00:29:11.763 } 00:29:11.763 } 00:29:11.763 ], 00:29:11.763 "mp_policy": "active_passive" 00:29:11.763 } 00:29:11.763 } 00:29:11.763 ] 00:29:11.763 13:28:52 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:29:11.763 13:28:52 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:29:12.022 [2024-07-26 13:28:52.473016] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1daa140 PMD being used: compress_qat 00:29:13.400 555954d7-8e27-41f0-9b5e-3cb38b5f25fa 00:29:13.400 13:28:53 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:29:13.400 adb10bb8-3703-4de4-9572-6ad89fe53257 00:29:13.400 13:28:53 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:29:13.400 13:28:53 compress_compdev -- common/autotest_common.sh@899 -- 
# local bdev_name=lvs0/lv0 00:29:13.400 13:28:53 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:13.400 13:28:53 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:29:13.400 13:28:53 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:13.400 13:28:53 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:13.400 13:28:53 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:13.659 13:28:53 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:29:13.919 [ 00:29:13.919 { 00:29:13.919 "name": "adb10bb8-3703-4de4-9572-6ad89fe53257", 00:29:13.919 "aliases": [ 00:29:13.919 "lvs0/lv0" 00:29:13.919 ], 00:29:13.919 "product_name": "Logical Volume", 00:29:13.919 "block_size": 512, 00:29:13.919 "num_blocks": 204800, 00:29:13.919 "uuid": "adb10bb8-3703-4de4-9572-6ad89fe53257", 00:29:13.919 "assigned_rate_limits": { 00:29:13.919 "rw_ios_per_sec": 0, 00:29:13.919 "rw_mbytes_per_sec": 0, 00:29:13.919 "r_mbytes_per_sec": 0, 00:29:13.919 "w_mbytes_per_sec": 0 00:29:13.919 }, 00:29:13.919 "claimed": false, 00:29:13.919 "zoned": false, 00:29:13.919 "supported_io_types": { 00:29:13.919 "read": true, 00:29:13.919 "write": true, 00:29:13.919 "unmap": true, 00:29:13.919 "flush": false, 00:29:13.919 "reset": true, 00:29:13.919 "nvme_admin": false, 00:29:13.919 "nvme_io": false, 00:29:13.919 "nvme_io_md": false, 00:29:13.919 "write_zeroes": true, 00:29:13.919 "zcopy": false, 00:29:13.919 "get_zone_info": false, 00:29:13.919 "zone_management": false, 00:29:13.919 "zone_append": false, 00:29:13.919 "compare": false, 00:29:13.919 "compare_and_write": false, 00:29:13.919 "abort": false, 00:29:13.919 "seek_hole": true, 00:29:13.919 "seek_data": true, 00:29:13.919 "copy": false, 00:29:13.919 "nvme_iov_md": false 
00:29:13.919 }, 00:29:13.919 "driver_specific": { 00:29:13.919 "lvol": { 00:29:13.919 "lvol_store_uuid": "555954d7-8e27-41f0-9b5e-3cb38b5f25fa", 00:29:13.919 "base_bdev": "Nvme0n1", 00:29:13.919 "thin_provision": true, 00:29:13.919 "num_allocated_clusters": 0, 00:29:13.919 "snapshot": false, 00:29:13.919 "clone": false, 00:29:13.919 "esnap_clone": false 00:29:13.919 } 00:29:13.919 } 00:29:13.919 } 00:29:13.919 ] 00:29:13.919 13:28:54 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:29:13.919 13:28:54 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:29:13.919 13:28:54 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:29:13.919 [2024-07-26 13:28:54.434107] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:29:13.919 COMP_lvs0/lv0 00:29:14.179 13:28:54 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:29:14.179 13:28:54 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:29:14.179 13:28:54 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:14.179 13:28:54 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:29:14.179 13:28:54 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:14.179 13:28:54 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:14.179 13:28:54 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:14.179 13:28:54 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:29:14.438 [ 00:29:14.439 { 00:29:14.439 "name": "COMP_lvs0/lv0", 00:29:14.439 "aliases": [ 00:29:14.439 
"f6f94408-b470-51c1-a5b4-e0b20f6e107e" 00:29:14.439 ], 00:29:14.439 "product_name": "compress", 00:29:14.439 "block_size": 512, 00:29:14.439 "num_blocks": 200704, 00:29:14.439 "uuid": "f6f94408-b470-51c1-a5b4-e0b20f6e107e", 00:29:14.439 "assigned_rate_limits": { 00:29:14.439 "rw_ios_per_sec": 0, 00:29:14.439 "rw_mbytes_per_sec": 0, 00:29:14.439 "r_mbytes_per_sec": 0, 00:29:14.439 "w_mbytes_per_sec": 0 00:29:14.439 }, 00:29:14.439 "claimed": false, 00:29:14.439 "zoned": false, 00:29:14.439 "supported_io_types": { 00:29:14.439 "read": true, 00:29:14.439 "write": true, 00:29:14.439 "unmap": false, 00:29:14.439 "flush": false, 00:29:14.439 "reset": false, 00:29:14.439 "nvme_admin": false, 00:29:14.439 "nvme_io": false, 00:29:14.439 "nvme_io_md": false, 00:29:14.439 "write_zeroes": true, 00:29:14.439 "zcopy": false, 00:29:14.439 "get_zone_info": false, 00:29:14.439 "zone_management": false, 00:29:14.439 "zone_append": false, 00:29:14.439 "compare": false, 00:29:14.439 "compare_and_write": false, 00:29:14.439 "abort": false, 00:29:14.439 "seek_hole": false, 00:29:14.439 "seek_data": false, 00:29:14.439 "copy": false, 00:29:14.439 "nvme_iov_md": false 00:29:14.439 }, 00:29:14.439 "driver_specific": { 00:29:14.439 "compress": { 00:29:14.439 "name": "COMP_lvs0/lv0", 00:29:14.439 "base_bdev_name": "adb10bb8-3703-4de4-9572-6ad89fe53257", 00:29:14.439 "pm_path": "/tmp/pmem/cef27773-c508-49ea-ac3b-8bff9b15df7c" 00:29:14.439 } 00:29:14.439 } 00:29:14.439 } 00:29:14.439 ] 00:29:14.439 13:28:54 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:29:14.439 13:28:54 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:14.439 [2024-07-26 13:28:54.964300] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f35b41b15c0 PMD being used: compress_qat 00:29:14.698 [2024-07-26 13:28:54.966395] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1f6f7e0 PMD being 
used: compress_qat
00:29:14.698 Running I/O for 3 seconds...
00:29:17.987
00:29:17.987 Latency(us)
00:29:17.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:17.987 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096)
00:29:17.987 Verification LBA range: start 0x0 length 0x3100
00:29:17.987 COMP_lvs0/lv0 : 3.01 4130.88 16.14 0.00 0.00 7691.50 131.07 13369.34
00:29:17.987 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096)
00:29:17.987 Verification LBA range: start 0x3100 length 0x3100
00:29:17.987 COMP_lvs0/lv0 : 3.00 4241.23 16.57 0.00 0.00 7510.55 122.06 13946.06
00:29:17.987 ===================================================================================================================
00:29:17.987 Total : 8372.11 32.70 0.00 0.00 7599.89 122.06 13946.06
00:29:17.987 0
00:29:17.987 13:28:57 compress_compdev -- compress/compress.sh@76 -- # destroy_vols
00:29:17.987 13:28:57 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0
00:29:17.987 13:28:58 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0
00:29:17.987 13:28:58 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT
00:29:17.987 13:28:58 compress_compdev -- compress/compress.sh@78 -- # killprocess 854289
00:29:17.988 13:28:58 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 854289 ']'
00:29:17.988 13:28:58 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 854289
00:29:17.988 13:28:58 compress_compdev -- common/autotest_common.sh@955 -- # uname
00:29:17.988 13:28:58 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:17.988 13:28:58 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 854289
00:29:18.247 13:28:58 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:29:18.247 13:28:58 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:29:18.247 13:28:58 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 854289'
00:29:18.247 killing process with pid 854289
00:29:18.247 13:28:58 compress_compdev -- common/autotest_common.sh@969 -- # kill 854289
00:29:18.247 Received shutdown signal, test time was about 3.000000 seconds
00:29:18.247
00:29:18.247 Latency(us)
00:29:18.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:18.247 ===================================================================================================================
00:29:18.247 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:18.247 13:28:58 compress_compdev -- common/autotest_common.sh@974 -- # wait 854289
00:29:20.780 13:29:00 compress_compdev -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512
00:29:20.780 13:29:00 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]]
00:29:20.780 13:29:00 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=856446
00:29:20.780 13:29:00 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT
00:29:20.780 13:29:00 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 856446
00:29:20.780 13:29:00 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 856446 ']'
00:29:20.780 13:29:00 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:20.780 13:29:00 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:20.780 13:29:00 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json
00:29:20.780 13:29:00
compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:20.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:20.780 13:29:00 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:20.780 13:29:00 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:29:20.780 [2024-07-26 13:29:01.043329] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:29:20.780 [2024-07-26 13:29:01.043394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid856446 ] 00:29:20.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.780 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:20.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.780 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:20.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.780 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:20.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.780 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:20.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.780 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:20.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.780 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:20.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.780 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:20.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.780 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:20.780 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.780 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:20.780 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.780 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:20.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.781 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:20.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.781 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:20.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.781 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:20.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.781 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:20.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.781 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:20.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.781 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:20.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.781 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:20.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.781 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:20.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.781 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:20.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.781 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:20.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.781 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:20.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.781 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:20.781 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.781 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:20.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.781 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:20.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.781 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:20.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.781 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:20.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.781 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:20.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.781 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:20.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.781 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:20.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.781 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:20.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.781 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:20.781 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:20.781 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:20.781 [2024-07-26 13:29:01.162040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:20.781 [2024-07-26 13:29:01.245474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:20.781 [2024-07-26 13:29:01.245480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.718 [2024-07-26 13:29:01.929293] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:29:21.718 13:29:01 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:21.718 13:29:01 compress_compdev -- 
common/autotest_common.sh@864 -- # return 0 00:29:21.718 13:29:01 compress_compdev -- compress/compress.sh@74 -- # create_vols 512 00:29:21.718 13:29:01 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:21.718 13:29:01 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:25.007 [2024-07-26 13:29:05.071885] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xe22f00 PMD being used: compress_qat 00:29:25.007 13:29:05 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:29:25.007 13:29:05 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:29:25.007 13:29:05 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:25.007 13:29:05 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:29:25.007 13:29:05 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:25.007 13:29:05 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:25.007 13:29:05 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:25.007 13:29:05 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:29:25.007 [ 00:29:25.007 { 00:29:25.007 "name": "Nvme0n1", 00:29:25.007 "aliases": [ 00:29:25.007 "44fe81c0-70e5-43cf-93d2-0ed601d4de42" 00:29:25.007 ], 00:29:25.007 "product_name": "NVMe disk", 00:29:25.007 "block_size": 512, 00:29:25.007 "num_blocks": 3907029168, 00:29:25.007 "uuid": "44fe81c0-70e5-43cf-93d2-0ed601d4de42", 00:29:25.007 "assigned_rate_limits": { 00:29:25.007 "rw_ios_per_sec": 0, 00:29:25.007 "rw_mbytes_per_sec": 0, 00:29:25.007 "r_mbytes_per_sec": 0, 00:29:25.007 "w_mbytes_per_sec": 0 00:29:25.007 }, 00:29:25.007 
"claimed": false, 00:29:25.007 "zoned": false, 00:29:25.007 "supported_io_types": { 00:29:25.007 "read": true, 00:29:25.007 "write": true, 00:29:25.007 "unmap": true, 00:29:25.007 "flush": true, 00:29:25.007 "reset": true, 00:29:25.007 "nvme_admin": true, 00:29:25.007 "nvme_io": true, 00:29:25.007 "nvme_io_md": false, 00:29:25.007 "write_zeroes": true, 00:29:25.007 "zcopy": false, 00:29:25.007 "get_zone_info": false, 00:29:25.007 "zone_management": false, 00:29:25.007 "zone_append": false, 00:29:25.007 "compare": false, 00:29:25.007 "compare_and_write": false, 00:29:25.007 "abort": true, 00:29:25.007 "seek_hole": false, 00:29:25.007 "seek_data": false, 00:29:25.007 "copy": false, 00:29:25.007 "nvme_iov_md": false 00:29:25.007 }, 00:29:25.007 "driver_specific": { 00:29:25.007 "nvme": [ 00:29:25.007 { 00:29:25.007 "pci_address": "0000:d8:00.0", 00:29:25.007 "trid": { 00:29:25.007 "trtype": "PCIe", 00:29:25.007 "traddr": "0000:d8:00.0" 00:29:25.007 }, 00:29:25.007 "ctrlr_data": { 00:29:25.007 "cntlid": 0, 00:29:25.007 "vendor_id": "0x8086", 00:29:25.007 "model_number": "INTEL SSDPE2KX020T8", 00:29:25.007 "serial_number": "BTLJ125505KA2P0BGN", 00:29:25.007 "firmware_revision": "VDV10170", 00:29:25.007 "oacs": { 00:29:25.007 "security": 0, 00:29:25.007 "format": 1, 00:29:25.007 "firmware": 1, 00:29:25.007 "ns_manage": 1 00:29:25.007 }, 00:29:25.007 "multi_ctrlr": false, 00:29:25.007 "ana_reporting": false 00:29:25.007 }, 00:29:25.007 "vs": { 00:29:25.007 "nvme_version": "1.2" 00:29:25.007 }, 00:29:25.007 "ns_data": { 00:29:25.007 "id": 1, 00:29:25.007 "can_share": false 00:29:25.007 } 00:29:25.007 } 00:29:25.007 ], 00:29:25.007 "mp_policy": "active_passive" 00:29:25.007 } 00:29:25.007 } 00:29:25.007 ] 00:29:25.267 13:29:05 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:29:25.267 13:29:05 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 
lvs0 00:29:25.267 [2024-07-26 13:29:05.756989] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xc5a160 PMD being used: compress_qat 00:29:26.712 283f4517-578b-4bec-9cb9-2cce534e5a2c 00:29:26.712 13:29:06 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:29:26.712 0728e9b3-18ce-4c9c-8da0-26304d0b94a2 00:29:26.712 13:29:07 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:29:26.712 13:29:07 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:29:26.712 13:29:07 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:26.712 13:29:07 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:29:26.712 13:29:07 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:26.712 13:29:07 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:26.712 13:29:07 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:26.971 13:29:07 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:29:27.230 [ 00:29:27.230 { 00:29:27.230 "name": "0728e9b3-18ce-4c9c-8da0-26304d0b94a2", 00:29:27.230 "aliases": [ 00:29:27.230 "lvs0/lv0" 00:29:27.230 ], 00:29:27.230 "product_name": "Logical Volume", 00:29:27.230 "block_size": 512, 00:29:27.230 "num_blocks": 204800, 00:29:27.230 "uuid": "0728e9b3-18ce-4c9c-8da0-26304d0b94a2", 00:29:27.230 "assigned_rate_limits": { 00:29:27.230 "rw_ios_per_sec": 0, 00:29:27.230 "rw_mbytes_per_sec": 0, 00:29:27.230 "r_mbytes_per_sec": 0, 00:29:27.230 "w_mbytes_per_sec": 0 00:29:27.230 }, 00:29:27.230 "claimed": false, 00:29:27.230 "zoned": false, 00:29:27.230 "supported_io_types": { 00:29:27.230 "read": true, 00:29:27.230 "write": true, 
00:29:27.230 "unmap": true, 00:29:27.230 "flush": false, 00:29:27.230 "reset": true, 00:29:27.230 "nvme_admin": false, 00:29:27.230 "nvme_io": false, 00:29:27.230 "nvme_io_md": false, 00:29:27.230 "write_zeroes": true, 00:29:27.230 "zcopy": false, 00:29:27.230 "get_zone_info": false, 00:29:27.230 "zone_management": false, 00:29:27.230 "zone_append": false, 00:29:27.230 "compare": false, 00:29:27.230 "compare_and_write": false, 00:29:27.230 "abort": false, 00:29:27.230 "seek_hole": true, 00:29:27.230 "seek_data": true, 00:29:27.230 "copy": false, 00:29:27.230 "nvme_iov_md": false 00:29:27.230 }, 00:29:27.230 "driver_specific": { 00:29:27.230 "lvol": { 00:29:27.230 "lvol_store_uuid": "283f4517-578b-4bec-9cb9-2cce534e5a2c", 00:29:27.230 "base_bdev": "Nvme0n1", 00:29:27.230 "thin_provision": true, 00:29:27.230 "num_allocated_clusters": 0, 00:29:27.230 "snapshot": false, 00:29:27.230 "clone": false, 00:29:27.230 "esnap_clone": false 00:29:27.230 } 00:29:27.230 } 00:29:27.230 } 00:29:27.230 ] 00:29:27.230 13:29:07 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:29:27.230 13:29:07 compress_compdev -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:29:27.230 13:29:07 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:29:27.230 [2024-07-26 13:29:07.692261] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:29:27.230 COMP_lvs0/lv0 00:29:27.230 13:29:07 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:29:27.230 13:29:07 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:29:27.230 13:29:07 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:27.230 13:29:07 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:29:27.230 13:29:07 compress_compdev -- common/autotest_common.sh@902 
-- # [[ -z '' ]] 00:29:27.230 13:29:07 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:27.230 13:29:07 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:27.489 13:29:07 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:29:27.749 [ 00:29:27.749 { 00:29:27.749 "name": "COMP_lvs0/lv0", 00:29:27.749 "aliases": [ 00:29:27.749 "6964c3cb-0bf8-5ecc-87b3-040d4b6ec09d" 00:29:27.749 ], 00:29:27.749 "product_name": "compress", 00:29:27.749 "block_size": 512, 00:29:27.749 "num_blocks": 200704, 00:29:27.749 "uuid": "6964c3cb-0bf8-5ecc-87b3-040d4b6ec09d", 00:29:27.749 "assigned_rate_limits": { 00:29:27.749 "rw_ios_per_sec": 0, 00:29:27.749 "rw_mbytes_per_sec": 0, 00:29:27.749 "r_mbytes_per_sec": 0, 00:29:27.749 "w_mbytes_per_sec": 0 00:29:27.749 }, 00:29:27.749 "claimed": false, 00:29:27.749 "zoned": false, 00:29:27.749 "supported_io_types": { 00:29:27.749 "read": true, 00:29:27.749 "write": true, 00:29:27.749 "unmap": false, 00:29:27.749 "flush": false, 00:29:27.749 "reset": false, 00:29:27.749 "nvme_admin": false, 00:29:27.749 "nvme_io": false, 00:29:27.749 "nvme_io_md": false, 00:29:27.749 "write_zeroes": true, 00:29:27.749 "zcopy": false, 00:29:27.749 "get_zone_info": false, 00:29:27.749 "zone_management": false, 00:29:27.749 "zone_append": false, 00:29:27.749 "compare": false, 00:29:27.749 "compare_and_write": false, 00:29:27.749 "abort": false, 00:29:27.749 "seek_hole": false, 00:29:27.749 "seek_data": false, 00:29:27.749 "copy": false, 00:29:27.749 "nvme_iov_md": false 00:29:27.749 }, 00:29:27.749 "driver_specific": { 00:29:27.749 "compress": { 00:29:27.750 "name": "COMP_lvs0/lv0", 00:29:27.750 "base_bdev_name": "0728e9b3-18ce-4c9c-8da0-26304d0b94a2", 00:29:27.750 "pm_path": "/tmp/pmem/2fed4e07-7642-4278-8a44-1052256d9a1a" 00:29:27.750 } 
00:29:27.750 }
00:29:27.750 }
00:29:27.750 }
00:29:27.750 ]
00:29:27.750 13:29:08 compress_compdev -- common/autotest_common.sh@907 -- # return 0
00:29:27.750 13:29:08 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:29:27.750 [2024-07-26 13:29:08.254662] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fb2781b15c0 PMD being used: compress_qat
00:29:27.750 [2024-07-26 13:29:08.256711] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xe1f8b0 PMD being used: compress_qat
00:29:27.750 Running I/O for 3 seconds...
00:29:31.041
00:29:31.041 Latency(us)
00:29:31.041 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:31.041 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096)
00:29:31.041 Verification LBA range: start 0x0 length 0x3100
00:29:31.042 COMP_lvs0/lv0 : 3.01 4112.09 16.06 0.00 0.00 7729.74 125.34 12949.91
00:29:31.042 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096)
00:29:31.042 Verification LBA range: start 0x3100 length 0x3100
00:29:31.042 COMP_lvs0/lv0 : 3.01 4218.40 16.48 0.00 0.00 7548.67 121.24 12792.63
00:29:31.042 ===================================================================================================================
00:29:31.042 Total : 8330.49 32.54 0.00 0.00 7638.04 121.24 12949.91
00:29:31.042 0
00:29:31.042 13:29:11 compress_compdev -- compress/compress.sh@76 -- # destroy_vols
00:29:31.042 13:29:11 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0
00:29:31.042 13:29:11 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0
00:29:31.301 13:29:11 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT
00:29:31.301 13:29:11 compress_compdev -- compress/compress.sh@78 -- # killprocess 856446
00:29:31.301 13:29:11 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 856446 ']'
00:29:31.301 13:29:11 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 856446
00:29:31.301 13:29:11 compress_compdev -- common/autotest_common.sh@955 -- # uname
00:29:31.301 13:29:11 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:31.301 13:29:11 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 856446
00:29:31.301 13:29:11 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:29:31.301 13:29:11 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:29:31.301 13:29:11 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 856446'
00:29:31.301 killing process with pid 856446
00:29:31.301 13:29:11 compress_compdev -- common/autotest_common.sh@969 -- # kill 856446
00:29:31.301 Received shutdown signal, test time was about 3.000000 seconds
00:29:31.301
00:29:31.301 Latency(us)
00:29:31.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:31.301 ===================================================================================================================
00:29:31.301 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:31.301 13:29:11 compress_compdev -- common/autotest_common.sh@974 -- # wait 856446
00:29:33.836 13:29:14 compress_compdev -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096
00:29:33.836 13:29:14 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]]
00:29:33.836 13:29:14 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=858725
00:29:33.836 13:29:14 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT
00:29:33.836 13:29:14 compress_compdev -- compress/compress.sh@67 -- #
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:29:33.836 13:29:14 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 858725 00:29:33.836 13:29:14 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 858725 ']' 00:29:33.836 13:29:14 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.836 13:29:14 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:33.836 13:29:14 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.836 13:29:14 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:33.836 13:29:14 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:29:33.836 [2024-07-26 13:29:14.275584] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:29:33.836 [2024-07-26 13:29:14.275652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid858725 ] 00:29:33.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.836 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:33.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.836 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:33.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.836 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:33.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.836 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:33.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.836 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:33.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.836 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:33.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.836 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:33.836 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.837 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:33.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.837 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:33.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.837 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:33.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.837 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:33.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.837 EAL: Requested device 0000:3d:02.3 cannot be used 
00:29:33.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.837 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:33.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.837 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:33.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.837 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:33.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.837 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:33.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.837 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:33.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.837 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:33.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.837 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:33.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.837 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:33.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.837 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:33.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.837 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:33.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.837 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:33.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.837 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:33.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.837 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:33.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.837 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:33.837 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.837 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:33.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.837 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:33.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.837 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:33.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.837 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:33.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.837 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:33.837 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.837 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:34.096 [2024-07-26 13:29:14.396232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:34.096 [2024-07-26 13:29:14.480439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:34.096 [2024-07-26 13:29:14.480444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.665 [2024-07-26 13:29:15.154426] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:29:34.924 13:29:15 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:34.924 13:29:15 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:29:34.924 13:29:15 compress_compdev -- compress/compress.sh@74 -- # create_vols 4096 00:29:34.924 13:29:15 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:34.924 13:29:15 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:38.213 [2024-07-26 13:29:18.303772] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x19abf00 PMD being used: compress_qat 00:29:38.213 13:29:18 
compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:29:38.213 13:29:18 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:29:38.213 13:29:18 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:38.213 13:29:18 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:29:38.213 13:29:18 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:38.213 13:29:18 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:38.213 13:29:18 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:38.213 13:29:18 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:29:38.473 [ 00:29:38.473 { 00:29:38.473 "name": "Nvme0n1", 00:29:38.473 "aliases": [ 00:29:38.473 "2db6d9e2-ea0d-4de0-98ad-23351463661c" 00:29:38.473 ], 00:29:38.473 "product_name": "NVMe disk", 00:29:38.473 "block_size": 512, 00:29:38.473 "num_blocks": 3907029168, 00:29:38.473 "uuid": "2db6d9e2-ea0d-4de0-98ad-23351463661c", 00:29:38.473 "assigned_rate_limits": { 00:29:38.473 "rw_ios_per_sec": 0, 00:29:38.473 "rw_mbytes_per_sec": 0, 00:29:38.473 "r_mbytes_per_sec": 0, 00:29:38.473 "w_mbytes_per_sec": 0 00:29:38.473 }, 00:29:38.473 "claimed": false, 00:29:38.473 "zoned": false, 00:29:38.473 "supported_io_types": { 00:29:38.473 "read": true, 00:29:38.473 "write": true, 00:29:38.473 "unmap": true, 00:29:38.473 "flush": true, 00:29:38.473 "reset": true, 00:29:38.473 "nvme_admin": true, 00:29:38.473 "nvme_io": true, 00:29:38.473 "nvme_io_md": false, 00:29:38.473 "write_zeroes": true, 00:29:38.473 "zcopy": false, 00:29:38.473 "get_zone_info": false, 00:29:38.473 "zone_management": false, 00:29:38.473 "zone_append": false, 00:29:38.473 "compare": false, 00:29:38.473 "compare_and_write": false, 00:29:38.473 
"abort": true, 00:29:38.473 "seek_hole": false, 00:29:38.473 "seek_data": false, 00:29:38.473 "copy": false, 00:29:38.473 "nvme_iov_md": false 00:29:38.473 }, 00:29:38.473 "driver_specific": { 00:29:38.473 "nvme": [ 00:29:38.473 { 00:29:38.473 "pci_address": "0000:d8:00.0", 00:29:38.473 "trid": { 00:29:38.473 "trtype": "PCIe", 00:29:38.473 "traddr": "0000:d8:00.0" 00:29:38.473 }, 00:29:38.473 "ctrlr_data": { 00:29:38.473 "cntlid": 0, 00:29:38.473 "vendor_id": "0x8086", 00:29:38.473 "model_number": "INTEL SSDPE2KX020T8", 00:29:38.473 "serial_number": "BTLJ125505KA2P0BGN", 00:29:38.473 "firmware_revision": "VDV10170", 00:29:38.473 "oacs": { 00:29:38.473 "security": 0, 00:29:38.473 "format": 1, 00:29:38.473 "firmware": 1, 00:29:38.473 "ns_manage": 1 00:29:38.473 }, 00:29:38.473 "multi_ctrlr": false, 00:29:38.473 "ana_reporting": false 00:29:38.473 }, 00:29:38.473 "vs": { 00:29:38.473 "nvme_version": "1.2" 00:29:38.473 }, 00:29:38.473 "ns_data": { 00:29:38.473 "id": 1, 00:29:38.473 "can_share": false 00:29:38.473 } 00:29:38.473 } 00:29:38.473 ], 00:29:38.473 "mp_policy": "active_passive" 00:29:38.473 } 00:29:38.473 } 00:29:38.473 ] 00:29:38.473 13:29:18 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:29:38.473 13:29:18 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:29:38.473 [2024-07-26 13:29:18.996940] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x17e3160 PMD being used: compress_qat 00:29:39.852 6c85dae3-be03-4dd6-a5e9-3dbe12c5939b 00:29:39.852 13:29:19 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:29:39.852 aa917e11-2a63-4e19-8e6c-5e78a64f7ceb 00:29:39.852 13:29:20 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:29:39.852 13:29:20 compress_compdev -- common/autotest_common.sh@899 -- 
# local bdev_name=lvs0/lv0 00:29:39.852 13:29:20 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:39.852 13:29:20 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:29:39.852 13:29:20 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:39.852 13:29:20 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:39.852 13:29:20 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:40.111 13:29:20 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:29:40.371 [ 00:29:40.371 { 00:29:40.371 "name": "aa917e11-2a63-4e19-8e6c-5e78a64f7ceb", 00:29:40.371 "aliases": [ 00:29:40.371 "lvs0/lv0" 00:29:40.371 ], 00:29:40.371 "product_name": "Logical Volume", 00:29:40.371 "block_size": 512, 00:29:40.371 "num_blocks": 204800, 00:29:40.371 "uuid": "aa917e11-2a63-4e19-8e6c-5e78a64f7ceb", 00:29:40.371 "assigned_rate_limits": { 00:29:40.371 "rw_ios_per_sec": 0, 00:29:40.371 "rw_mbytes_per_sec": 0, 00:29:40.371 "r_mbytes_per_sec": 0, 00:29:40.371 "w_mbytes_per_sec": 0 00:29:40.371 }, 00:29:40.371 "claimed": false, 00:29:40.371 "zoned": false, 00:29:40.371 "supported_io_types": { 00:29:40.371 "read": true, 00:29:40.371 "write": true, 00:29:40.371 "unmap": true, 00:29:40.371 "flush": false, 00:29:40.371 "reset": true, 00:29:40.371 "nvme_admin": false, 00:29:40.371 "nvme_io": false, 00:29:40.371 "nvme_io_md": false, 00:29:40.371 "write_zeroes": true, 00:29:40.371 "zcopy": false, 00:29:40.371 "get_zone_info": false, 00:29:40.371 "zone_management": false, 00:29:40.371 "zone_append": false, 00:29:40.371 "compare": false, 00:29:40.371 "compare_and_write": false, 00:29:40.371 "abort": false, 00:29:40.371 "seek_hole": true, 00:29:40.371 "seek_data": true, 00:29:40.371 "copy": false, 00:29:40.371 "nvme_iov_md": false 
00:29:40.371 }, 00:29:40.371 "driver_specific": { 00:29:40.371 "lvol": { 00:29:40.371 "lvol_store_uuid": "6c85dae3-be03-4dd6-a5e9-3dbe12c5939b", 00:29:40.371 "base_bdev": "Nvme0n1", 00:29:40.371 "thin_provision": true, 00:29:40.371 "num_allocated_clusters": 0, 00:29:40.371 "snapshot": false, 00:29:40.371 "clone": false, 00:29:40.371 "esnap_clone": false 00:29:40.371 } 00:29:40.371 } 00:29:40.371 } 00:29:40.371 ] 00:29:40.371 13:29:20 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:29:40.371 13:29:20 compress_compdev -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:29:40.371 13:29:20 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:29:40.371 [2024-07-26 13:29:20.891693] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:29:40.371 COMP_lvs0/lv0 00:29:40.630 13:29:20 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:29:40.630 13:29:20 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:29:40.630 13:29:20 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:40.630 13:29:20 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:29:40.630 13:29:20 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:40.630 13:29:20 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:40.630 13:29:20 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:40.630 13:29:21 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:29:40.889 [ 00:29:40.889 { 00:29:40.889 "name": "COMP_lvs0/lv0", 00:29:40.889 "aliases": [ 00:29:40.889 
"4d12b567-1ca2-55c3-84c2-1dbc4304f896" 00:29:40.889 ], 00:29:40.889 "product_name": "compress", 00:29:40.889 "block_size": 4096, 00:29:40.889 "num_blocks": 25088, 00:29:40.889 "uuid": "4d12b567-1ca2-55c3-84c2-1dbc4304f896", 00:29:40.889 "assigned_rate_limits": { 00:29:40.889 "rw_ios_per_sec": 0, 00:29:40.889 "rw_mbytes_per_sec": 0, 00:29:40.889 "r_mbytes_per_sec": 0, 00:29:40.889 "w_mbytes_per_sec": 0 00:29:40.889 }, 00:29:40.889 "claimed": false, 00:29:40.889 "zoned": false, 00:29:40.889 "supported_io_types": { 00:29:40.889 "read": true, 00:29:40.889 "write": true, 00:29:40.889 "unmap": false, 00:29:40.889 "flush": false, 00:29:40.889 "reset": false, 00:29:40.889 "nvme_admin": false, 00:29:40.889 "nvme_io": false, 00:29:40.889 "nvme_io_md": false, 00:29:40.889 "write_zeroes": true, 00:29:40.889 "zcopy": false, 00:29:40.889 "get_zone_info": false, 00:29:40.889 "zone_management": false, 00:29:40.889 "zone_append": false, 00:29:40.889 "compare": false, 00:29:40.889 "compare_and_write": false, 00:29:40.889 "abort": false, 00:29:40.889 "seek_hole": false, 00:29:40.889 "seek_data": false, 00:29:40.889 "copy": false, 00:29:40.889 "nvme_iov_md": false 00:29:40.889 }, 00:29:40.889 "driver_specific": { 00:29:40.889 "compress": { 00:29:40.889 "name": "COMP_lvs0/lv0", 00:29:40.889 "base_bdev_name": "aa917e11-2a63-4e19-8e6c-5e78a64f7ceb", 00:29:40.889 "pm_path": "/tmp/pmem/f3b5fa8a-897f-4706-a258-310d3d5bb583" 00:29:40.889 } 00:29:40.889 } 00:29:40.889 } 00:29:40.889 ] 00:29:40.889 13:29:21 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:29:40.889 13:29:21 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:41.149 [2024-07-26 13:29:21.457974] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f19ec1b15c0 PMD being used: compress_qat 00:29:41.149 [2024-07-26 13:29:21.460055] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x19a88b0 PMD being 
used: compress_qat 00:29:41.149 Running I/O for 3 seconds... 00:29:44.436 00:29:44.436 Latency(us) 00:29:44.436 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.436 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:29:44.436 Verification LBA range: start 0x0 length 0x3100 00:29:44.436 COMP_lvs0/lv0 : 3.01 4053.62 15.83 0.00 0.00 7829.57 176.95 14575.21 00:29:44.436 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:29:44.436 Verification LBA range: start 0x3100 length 0x3100 00:29:44.436 COMP_lvs0/lv0 : 3.01 4152.13 16.22 0.00 0.00 7661.48 167.12 14470.35 00:29:44.436 =================================================================================================================== 00:29:44.436 Total : 8205.75 32.05 0.00 0.00 7744.55 167.12 14575.21 00:29:44.436 0 00:29:44.436 13:29:24 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:29:44.436 13:29:24 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:29:44.436 13:29:24 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:29:44.436 13:29:24 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:29:44.436 13:29:24 compress_compdev -- compress/compress.sh@78 -- # killprocess 858725 00:29:44.436 13:29:24 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 858725 ']' 00:29:44.436 13:29:24 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 858725 00:29:44.436 13:29:24 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:29:44.695 13:29:24 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:44.695 13:29:24 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 858725 00:29:44.695 13:29:25 compress_compdev -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:44.695 13:29:25 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:44.695 13:29:25 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 858725' 00:29:44.695 killing process with pid 858725 00:29:44.695 13:29:25 compress_compdev -- common/autotest_common.sh@969 -- # kill 858725 00:29:44.695 Received shutdown signal, test time was about 3.000000 seconds 00:29:44.695 00:29:44.695 Latency(us) 00:29:44.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.695 =================================================================================================================== 00:29:44.695 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:44.695 13:29:25 compress_compdev -- common/autotest_common.sh@974 -- # wait 858725 00:29:47.251 13:29:27 compress_compdev -- compress/compress.sh@89 -- # run_bdevio 00:29:47.251 13:29:27 compress_compdev -- compress/compress.sh@50 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:29:47.251 13:29:27 compress_compdev -- compress/compress.sh@55 -- # bdevio_pid=860938 00:29:47.251 13:29:27 compress_compdev -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:47.251 13:29:27 compress_compdev -- compress/compress.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json -w 00:29:47.251 13:29:27 compress_compdev -- compress/compress.sh@57 -- # waitforlisten 860938 00:29:47.251 13:29:27 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 860938 ']' 00:29:47.252 13:29:27 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.252 13:29:27 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:47.252 13:29:27 compress_compdev -- common/autotest_common.sh@838 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.252 13:29:27 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:47.252 13:29:27 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:29:47.252 [2024-07-26 13:29:27.533484] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:29:47.252 [2024-07-26 13:29:27.533547] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid860938 ] 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:29:47.252 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: 
Requested device 0000:3f:01.6 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:47.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:47.252 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:47.252 [2024-07-26 13:29:27.664468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:47.252 [2024-07-26 13:29:27.746954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.252 [2024-07-26 13:29:27.747046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:47.252 [2024-07-26 13:29:27.747050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.195 [2024-07-26 13:29:28.426052] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:29:48.195 13:29:28 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:48.195 13:29:28 compress_compdev -- 
common/autotest_common.sh@864 -- # return 0 00:29:48.195 13:29:28 compress_compdev -- compress/compress.sh@58 -- # create_vols 00:29:48.195 13:29:28 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:48.195 13:29:28 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:51.485 [2024-07-26 13:29:31.520822] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x261faa0 PMD being used: compress_qat 00:29:51.485 13:29:31 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:29:51.485 13:29:31 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:29:51.485 13:29:31 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:51.485 13:29:31 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:29:51.485 13:29:31 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:51.485 13:29:31 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:51.485 13:29:31 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:51.485 13:29:31 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:29:51.485 [ 00:29:51.485 { 00:29:51.485 "name": "Nvme0n1", 00:29:51.485 "aliases": [ 00:29:51.485 "e404121c-5078-49fb-a5be-1392fb6b6c59" 00:29:51.485 ], 00:29:51.485 "product_name": "NVMe disk", 00:29:51.485 "block_size": 512, 00:29:51.485 "num_blocks": 3907029168, 00:29:51.485 "uuid": "e404121c-5078-49fb-a5be-1392fb6b6c59", 00:29:51.485 "assigned_rate_limits": { 00:29:51.485 "rw_ios_per_sec": 0, 00:29:51.485 "rw_mbytes_per_sec": 0, 00:29:51.485 "r_mbytes_per_sec": 0, 00:29:51.485 "w_mbytes_per_sec": 0 00:29:51.485 }, 00:29:51.485 
"claimed": false, 00:29:51.485 "zoned": false, 00:29:51.485 "supported_io_types": { 00:29:51.485 "read": true, 00:29:51.485 "write": true, 00:29:51.485 "unmap": true, 00:29:51.485 "flush": true, 00:29:51.485 "reset": true, 00:29:51.485 "nvme_admin": true, 00:29:51.485 "nvme_io": true, 00:29:51.485 "nvme_io_md": false, 00:29:51.485 "write_zeroes": true, 00:29:51.485 "zcopy": false, 00:29:51.485 "get_zone_info": false, 00:29:51.485 "zone_management": false, 00:29:51.485 "zone_append": false, 00:29:51.485 "compare": false, 00:29:51.485 "compare_and_write": false, 00:29:51.485 "abort": true, 00:29:51.485 "seek_hole": false, 00:29:51.485 "seek_data": false, 00:29:51.485 "copy": false, 00:29:51.485 "nvme_iov_md": false 00:29:51.485 }, 00:29:51.485 "driver_specific": { 00:29:51.485 "nvme": [ 00:29:51.485 { 00:29:51.485 "pci_address": "0000:d8:00.0", 00:29:51.485 "trid": { 00:29:51.485 "trtype": "PCIe", 00:29:51.485 "traddr": "0000:d8:00.0" 00:29:51.485 }, 00:29:51.485 "ctrlr_data": { 00:29:51.485 "cntlid": 0, 00:29:51.485 "vendor_id": "0x8086", 00:29:51.485 "model_number": "INTEL SSDPE2KX020T8", 00:29:51.485 "serial_number": "BTLJ125505KA2P0BGN", 00:29:51.485 "firmware_revision": "VDV10170", 00:29:51.485 "oacs": { 00:29:51.485 "security": 0, 00:29:51.485 "format": 1, 00:29:51.485 "firmware": 1, 00:29:51.485 "ns_manage": 1 00:29:51.485 }, 00:29:51.485 "multi_ctrlr": false, 00:29:51.485 "ana_reporting": false 00:29:51.485 }, 00:29:51.485 "vs": { 00:29:51.485 "nvme_version": "1.2" 00:29:51.485 }, 00:29:51.485 "ns_data": { 00:29:51.485 "id": 1, 00:29:51.485 "can_share": false 00:29:51.485 } 00:29:51.485 } 00:29:51.485 ], 00:29:51.485 "mp_policy": "active_passive" 00:29:51.485 } 00:29:51.485 } 00:29:51.485 ] 00:29:51.485 13:29:32 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:29:51.485 13:29:32 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 
lvs0 00:29:51.745 [2024-07-26 13:29:32.213538] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2621f60 PMD being used: compress_qat 00:29:53.123 baa36acf-a47a-497b-99d0-4d137c92c5b0 00:29:53.123 13:29:33 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:29:53.123 b072d9c0-51f2-499c-93e4-b6e3a5742d9c 00:29:53.123 13:29:33 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:29:53.123 13:29:33 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:29:53.123 13:29:33 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:53.123 13:29:33 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:29:53.123 13:29:33 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:53.123 13:29:33 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:53.123 13:29:33 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:53.382 13:29:33 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:29:53.640 [ 00:29:53.640 { 00:29:53.641 "name": "b072d9c0-51f2-499c-93e4-b6e3a5742d9c", 00:29:53.641 "aliases": [ 00:29:53.641 "lvs0/lv0" 00:29:53.641 ], 00:29:53.641 "product_name": "Logical Volume", 00:29:53.641 "block_size": 512, 00:29:53.641 "num_blocks": 204800, 00:29:53.641 "uuid": "b072d9c0-51f2-499c-93e4-b6e3a5742d9c", 00:29:53.641 "assigned_rate_limits": { 00:29:53.641 "rw_ios_per_sec": 0, 00:29:53.641 "rw_mbytes_per_sec": 0, 00:29:53.641 "r_mbytes_per_sec": 0, 00:29:53.641 "w_mbytes_per_sec": 0 00:29:53.641 }, 00:29:53.641 "claimed": false, 00:29:53.641 "zoned": false, 00:29:53.641 "supported_io_types": { 00:29:53.641 "read": true, 00:29:53.641 "write": true, 
00:29:53.641 "unmap": true, 00:29:53.641 "flush": false, 00:29:53.641 "reset": true, 00:29:53.641 "nvme_admin": false, 00:29:53.641 "nvme_io": false, 00:29:53.641 "nvme_io_md": false, 00:29:53.641 "write_zeroes": true, 00:29:53.641 "zcopy": false, 00:29:53.641 "get_zone_info": false, 00:29:53.641 "zone_management": false, 00:29:53.641 "zone_append": false, 00:29:53.641 "compare": false, 00:29:53.641 "compare_and_write": false, 00:29:53.641 "abort": false, 00:29:53.641 "seek_hole": true, 00:29:53.641 "seek_data": true, 00:29:53.641 "copy": false, 00:29:53.641 "nvme_iov_md": false 00:29:53.641 }, 00:29:53.641 "driver_specific": { 00:29:53.641 "lvol": { 00:29:53.641 "lvol_store_uuid": "baa36acf-a47a-497b-99d0-4d137c92c5b0", 00:29:53.641 "base_bdev": "Nvme0n1", 00:29:53.641 "thin_provision": true, 00:29:53.641 "num_allocated_clusters": 0, 00:29:53.641 "snapshot": false, 00:29:53.641 "clone": false, 00:29:53.641 "esnap_clone": false 00:29:53.641 } 00:29:53.641 } 00:29:53.641 } 00:29:53.641 ] 00:29:53.641 13:29:33 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:29:53.641 13:29:33 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:29:53.641 13:29:33 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:29:53.900 [2024-07-26 13:29:34.198828] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:29:53.900 COMP_lvs0/lv0 00:29:53.900 13:29:34 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:29:53.900 13:29:34 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:29:53.900 13:29:34 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:53.900 13:29:34 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:29:53.900 13:29:34 compress_compdev -- common/autotest_common.sh@902 -- # [[ 
-z '' ]] 00:29:53.900 13:29:34 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:53.900 13:29:34 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:54.159 13:29:34 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:29:54.159 [ 00:29:54.159 { 00:29:54.159 "name": "COMP_lvs0/lv0", 00:29:54.159 "aliases": [ 00:29:54.159 "2d15ea20-0b45-5255-a96c-a301961e33e4" 00:29:54.159 ], 00:29:54.159 "product_name": "compress", 00:29:54.159 "block_size": 512, 00:29:54.159 "num_blocks": 200704, 00:29:54.159 "uuid": "2d15ea20-0b45-5255-a96c-a301961e33e4", 00:29:54.159 "assigned_rate_limits": { 00:29:54.159 "rw_ios_per_sec": 0, 00:29:54.159 "rw_mbytes_per_sec": 0, 00:29:54.159 "r_mbytes_per_sec": 0, 00:29:54.159 "w_mbytes_per_sec": 0 00:29:54.159 }, 00:29:54.159 "claimed": false, 00:29:54.159 "zoned": false, 00:29:54.159 "supported_io_types": { 00:29:54.159 "read": true, 00:29:54.159 "write": true, 00:29:54.159 "unmap": false, 00:29:54.159 "flush": false, 00:29:54.159 "reset": false, 00:29:54.159 "nvme_admin": false, 00:29:54.159 "nvme_io": false, 00:29:54.159 "nvme_io_md": false, 00:29:54.159 "write_zeroes": true, 00:29:54.159 "zcopy": false, 00:29:54.159 "get_zone_info": false, 00:29:54.159 "zone_management": false, 00:29:54.159 "zone_append": false, 00:29:54.159 "compare": false, 00:29:54.159 "compare_and_write": false, 00:29:54.159 "abort": false, 00:29:54.159 "seek_hole": false, 00:29:54.159 "seek_data": false, 00:29:54.159 "copy": false, 00:29:54.159 "nvme_iov_md": false 00:29:54.159 }, 00:29:54.159 "driver_specific": { 00:29:54.159 "compress": { 00:29:54.159 "name": "COMP_lvs0/lv0", 00:29:54.159 "base_bdev_name": "b072d9c0-51f2-499c-93e4-b6e3a5742d9c", 00:29:54.159 "pm_path": "/tmp/pmem/e55e9414-7086-4538-98af-f315913335e5" 00:29:54.159 } 
00:29:54.159 } 00:29:54.159 } 00:29:54.159 ] 00:29:54.159 13:29:34 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:29:54.159 13:29:34 compress_compdev -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:29:54.418 [2024-07-26 13:29:34.755790] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fd9f01b1350 PMD being used: compress_qat 00:29:54.418 I/O targets: 00:29:54.418 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:29:54.418 00:29:54.418 00:29:54.418 CUnit - A unit testing framework for C - Version 2.1-3 00:29:54.418 http://cunit.sourceforge.net/ 00:29:54.418 00:29:54.418 00:29:54.418 Suite: bdevio tests on: COMP_lvs0/lv0 00:29:54.418 Test: blockdev write read block ...passed 00:29:54.418 Test: blockdev write zeroes read block ...passed 00:29:54.418 Test: blockdev write zeroes read no split ...passed 00:29:54.418 Test: blockdev write zeroes read split ...passed 00:29:54.418 Test: blockdev write zeroes read split partial ...passed 00:29:54.418 Test: blockdev reset ...[2024-07-26 13:29:34.802017] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:29:54.418 passed 00:29:54.418 Test: blockdev write read 8 blocks ...passed 00:29:54.418 Test: blockdev write read size > 128k ...passed 00:29:54.418 Test: blockdev write read invalid size ...passed 00:29:54.418 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:54.418 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:54.418 Test: blockdev write read max offset ...passed 00:29:54.418 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:54.418 Test: blockdev writev readv 8 blocks ...passed 00:29:54.419 Test: blockdev writev readv 30 x 1block ...passed 00:29:54.419 Test: blockdev writev readv block ...passed 00:29:54.419 Test: blockdev writev readv size > 128k ...passed 00:29:54.419 Test: blockdev writev readv 
size > 128k in two iovs ...passed 00:29:54.419 Test: blockdev comparev and writev ...passed 00:29:54.419 Test: blockdev nvme passthru rw ...passed 00:29:54.419 Test: blockdev nvme passthru vendor specific ...passed 00:29:54.419 Test: blockdev nvme admin passthru ...passed 00:29:54.419 Test: blockdev copy ...passed 00:29:54.419 00:29:54.419 Run Summary: Type Total Ran Passed Failed Inactive 00:29:54.419 suites 1 1 n/a 0 0 00:29:54.419 tests 23 23 23 0 0 00:29:54.419 asserts 130 130 130 0 n/a 00:29:54.419 00:29:54.419 Elapsed time = 0.170 seconds 00:29:54.419 0 00:29:54.419 13:29:34 compress_compdev -- compress/compress.sh@60 -- # destroy_vols 00:29:54.419 13:29:34 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:29:54.678 13:29:35 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:29:54.937 13:29:35 compress_compdev -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:29:54.937 13:29:35 compress_compdev -- compress/compress.sh@62 -- # killprocess 860938 00:29:54.937 13:29:35 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 860938 ']' 00:29:54.937 13:29:35 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 860938 00:29:54.937 13:29:35 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:29:54.937 13:29:35 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:54.937 13:29:35 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 860938 00:29:54.937 13:29:35 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:54.937 13:29:35 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:54.937 13:29:35 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 860938' 00:29:54.937 killing process 
with pid 860938 00:29:54.937 13:29:35 compress_compdev -- common/autotest_common.sh@969 -- # kill 860938 00:29:54.937 13:29:35 compress_compdev -- common/autotest_common.sh@974 -- # wait 860938 00:29:57.474 13:29:37 compress_compdev -- compress/compress.sh@91 -- # '[' 0 -eq 1 ']' 00:29:57.474 13:29:37 compress_compdev -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:29:57.474 00:29:57.474 real 0m50.142s 00:29:57.474 user 1m52.969s 00:29:57.474 sys 0m5.520s 00:29:57.474 13:29:37 compress_compdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:57.474 13:29:37 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:29:57.474 ************************************ 00:29:57.474 END TEST compress_compdev 00:29:57.474 ************************************ 00:29:57.474 13:29:37 -- spdk/autotest.sh@353 -- # run_test compress_isal /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:29:57.474 13:29:37 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:57.474 13:29:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:57.474 13:29:37 -- common/autotest_common.sh@10 -- # set +x 00:29:57.474 ************************************ 00:29:57.474 START TEST compress_isal 00:29:57.474 ************************************ 00:29:57.474 13:29:37 compress_isal -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:29:57.474 * Looking for test storage... 
00:29:57.474 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:29:57.474 13:29:37 compress_isal -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.474 13:29:37 compress_isal -- nvmf/common.sh@7 -- # uname -s 00:29:57.474 13:29:37 compress_isal -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.474 13:29:37 compress_isal -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.474 13:29:37 compress_isal -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.474 13:29:37 compress_isal -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.474 13:29:37 compress_isal -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.474 13:29:37 compress_isal -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.474 13:29:37 compress_isal -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.474 13:29:37 compress_isal -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.474 13:29:37 compress_isal -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.474 13:29:37 compress_isal -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.474 13:29:37 compress_isal -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:29:57.474 13:29:37 compress_isal -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:29:57.474 13:29:37 compress_isal -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.474 13:29:37 compress_isal -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.474 13:29:37 compress_isal -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:57.474 13:29:37 compress_isal -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.474 13:29:37 compress_isal -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:29:57.474 13:29:37 compress_isal -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.474 13:29:37 compress_isal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.474 13:29:37 compress_isal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.474 13:29:37 compress_isal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.474 13:29:37 compress_isal -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.474 13:29:37 compress_isal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.474 13:29:37 compress_isal -- paths/export.sh@5 -- # export PATH 00:29:57.475 13:29:37 compress_isal -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.475 13:29:37 compress_isal -- nvmf/common.sh@47 -- # : 0 00:29:57.475 13:29:37 compress_isal -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:57.475 13:29:37 compress_isal -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:57.475 13:29:37 compress_isal -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.475 13:29:37 compress_isal -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.475 13:29:37 compress_isal -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.475 13:29:37 compress_isal -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:57.475 13:29:37 compress_isal -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:57.475 13:29:37 compress_isal -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:57.475 13:29:37 compress_isal -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:57.475 13:29:37 compress_isal -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:29:57.475 13:29:37 compress_isal -- compress/compress.sh@82 -- # test_type=isal 00:29:57.475 13:29:37 compress_isal -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:29:57.475 13:29:37 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:29:57.475 13:29:37 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:29:57.475 13:29:37 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=862609 00:29:57.475 13:29:37 
compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:57.475 13:29:37 compress_isal -- compress/compress.sh@73 -- # waitforlisten 862609 00:29:57.475 13:29:37 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 862609 ']' 00:29:57.475 13:29:37 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.475 13:29:37 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:57.475 13:29:37 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:57.475 13:29:37 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:57.475 13:29:37 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:29:57.475 [2024-07-26 13:29:37.979097] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:29:57.475 [2024-07-26 13:29:37.979177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid862609 ] 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3d:02.3 cannot be used 
00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:57.735 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:57.735 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:57.735 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:57.735 [2024-07-26 13:29:38.101460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:57.735 [2024-07-26 13:29:38.189774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:57.735 [2024-07-26 13:29:38.189779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:58.673 13:29:38 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:58.673 13:29:38 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:29:58.673 13:29:38 compress_isal -- compress/compress.sh@74 -- # create_vols 00:29:58.673 13:29:38 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:58.673 13:29:38 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:01.962 13:29:42 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:30:01.962 13:29:42 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:30:01.962 13:29:42 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:01.962 13:29:42 
compress_isal -- common/autotest_common.sh@901 -- # local i 00:30:01.962 13:29:42 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:01.962 13:29:42 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:01.962 13:29:42 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:01.962 13:29:42 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:30:02.222 [ 00:30:02.222 { 00:30:02.222 "name": "Nvme0n1", 00:30:02.222 "aliases": [ 00:30:02.222 "04005ee4-3e36-4cc4-a453-fc79c55309cb" 00:30:02.222 ], 00:30:02.222 "product_name": "NVMe disk", 00:30:02.222 "block_size": 512, 00:30:02.222 "num_blocks": 3907029168, 00:30:02.222 "uuid": "04005ee4-3e36-4cc4-a453-fc79c55309cb", 00:30:02.222 "assigned_rate_limits": { 00:30:02.222 "rw_ios_per_sec": 0, 00:30:02.222 "rw_mbytes_per_sec": 0, 00:30:02.222 "r_mbytes_per_sec": 0, 00:30:02.222 "w_mbytes_per_sec": 0 00:30:02.222 }, 00:30:02.222 "claimed": false, 00:30:02.222 "zoned": false, 00:30:02.222 "supported_io_types": { 00:30:02.222 "read": true, 00:30:02.222 "write": true, 00:30:02.222 "unmap": true, 00:30:02.222 "flush": true, 00:30:02.222 "reset": true, 00:30:02.222 "nvme_admin": true, 00:30:02.222 "nvme_io": true, 00:30:02.222 "nvme_io_md": false, 00:30:02.222 "write_zeroes": true, 00:30:02.222 "zcopy": false, 00:30:02.222 "get_zone_info": false, 00:30:02.222 "zone_management": false, 00:30:02.222 "zone_append": false, 00:30:02.222 "compare": false, 00:30:02.222 "compare_and_write": false, 00:30:02.222 "abort": true, 00:30:02.222 "seek_hole": false, 00:30:02.222 "seek_data": false, 00:30:02.222 "copy": false, 00:30:02.222 "nvme_iov_md": false 00:30:02.222 }, 00:30:02.222 "driver_specific": { 00:30:02.222 "nvme": [ 00:30:02.222 { 00:30:02.222 "pci_address": "0000:d8:00.0", 00:30:02.222 "trid": { 00:30:02.222 
"trtype": "PCIe", 00:30:02.222 "traddr": "0000:d8:00.0" 00:30:02.222 }, 00:30:02.222 "ctrlr_data": { 00:30:02.222 "cntlid": 0, 00:30:02.222 "vendor_id": "0x8086", 00:30:02.222 "model_number": "INTEL SSDPE2KX020T8", 00:30:02.222 "serial_number": "BTLJ125505KA2P0BGN", 00:30:02.222 "firmware_revision": "VDV10170", 00:30:02.222 "oacs": { 00:30:02.222 "security": 0, 00:30:02.222 "format": 1, 00:30:02.222 "firmware": 1, 00:30:02.222 "ns_manage": 1 00:30:02.222 }, 00:30:02.222 "multi_ctrlr": false, 00:30:02.222 "ana_reporting": false 00:30:02.222 }, 00:30:02.222 "vs": { 00:30:02.222 "nvme_version": "1.2" 00:30:02.222 }, 00:30:02.222 "ns_data": { 00:30:02.222 "id": 1, 00:30:02.222 "can_share": false 00:30:02.222 } 00:30:02.222 } 00:30:02.222 ], 00:30:02.222 "mp_policy": "active_passive" 00:30:02.222 } 00:30:02.222 } 00:30:02.222 ] 00:30:02.222 13:29:42 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:30:02.222 13:29:42 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:30:03.602 b2918567-b981-4681-83d2-f16563e6a5e1 00:30:03.602 13:29:43 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:30:03.861 98bd8f0a-6162-4dc5-b888-ac581118f709 00:30:03.861 13:29:44 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:30:03.861 13:29:44 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:30:03.861 13:29:44 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:03.861 13:29:44 compress_isal -- common/autotest_common.sh@901 -- # local i 00:30:03.861 13:29:44 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:03.861 13:29:44 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:03.861 13:29:44 compress_isal -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:04.120 13:29:44 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:30:04.380 [ 00:30:04.380 { 00:30:04.380 "name": "98bd8f0a-6162-4dc5-b888-ac581118f709", 00:30:04.380 "aliases": [ 00:30:04.380 "lvs0/lv0" 00:30:04.380 ], 00:30:04.380 "product_name": "Logical Volume", 00:30:04.380 "block_size": 512, 00:30:04.380 "num_blocks": 204800, 00:30:04.380 "uuid": "98bd8f0a-6162-4dc5-b888-ac581118f709", 00:30:04.380 "assigned_rate_limits": { 00:30:04.380 "rw_ios_per_sec": 0, 00:30:04.380 "rw_mbytes_per_sec": 0, 00:30:04.380 "r_mbytes_per_sec": 0, 00:30:04.380 "w_mbytes_per_sec": 0 00:30:04.380 }, 00:30:04.380 "claimed": false, 00:30:04.380 "zoned": false, 00:30:04.380 "supported_io_types": { 00:30:04.380 "read": true, 00:30:04.380 "write": true, 00:30:04.380 "unmap": true, 00:30:04.380 "flush": false, 00:30:04.380 "reset": true, 00:30:04.380 "nvme_admin": false, 00:30:04.380 "nvme_io": false, 00:30:04.380 "nvme_io_md": false, 00:30:04.380 "write_zeroes": true, 00:30:04.380 "zcopy": false, 00:30:04.380 "get_zone_info": false, 00:30:04.380 "zone_management": false, 00:30:04.380 "zone_append": false, 00:30:04.380 "compare": false, 00:30:04.380 "compare_and_write": false, 00:30:04.380 "abort": false, 00:30:04.380 "seek_hole": true, 00:30:04.380 "seek_data": true, 00:30:04.380 "copy": false, 00:30:04.380 "nvme_iov_md": false 00:30:04.380 }, 00:30:04.380 "driver_specific": { 00:30:04.380 "lvol": { 00:30:04.380 "lvol_store_uuid": "b2918567-b981-4681-83d2-f16563e6a5e1", 00:30:04.380 "base_bdev": "Nvme0n1", 00:30:04.380 "thin_provision": true, 00:30:04.380 "num_allocated_clusters": 0, 00:30:04.380 "snapshot": false, 00:30:04.380 "clone": false, 00:30:04.380 "esnap_clone": false 00:30:04.380 } 00:30:04.380 } 00:30:04.380 } 00:30:04.380 ] 00:30:04.380 13:29:44 compress_isal -- 
common/autotest_common.sh@907 -- # return 0 00:30:04.380 13:29:44 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:30:04.380 13:29:44 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:30:04.380 [2024-07-26 13:29:44.881851] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:30:04.380 COMP_lvs0/lv0 00:30:04.380 13:29:44 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:30:04.380 13:29:44 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:30:04.380 13:29:44 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:04.380 13:29:44 compress_isal -- common/autotest_common.sh@901 -- # local i 00:30:04.380 13:29:44 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:04.380 13:29:44 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:04.380 13:29:44 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:04.639 13:29:45 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:30:04.899 [ 00:30:04.899 { 00:30:04.899 "name": "COMP_lvs0/lv0", 00:30:04.899 "aliases": [ 00:30:04.899 "05336a32-b5b0-553d-a557-d892068e53b2" 00:30:04.899 ], 00:30:04.899 "product_name": "compress", 00:30:04.899 "block_size": 512, 00:30:04.899 "num_blocks": 200704, 00:30:04.899 "uuid": "05336a32-b5b0-553d-a557-d892068e53b2", 00:30:04.899 "assigned_rate_limits": { 00:30:04.899 "rw_ios_per_sec": 0, 00:30:04.899 "rw_mbytes_per_sec": 0, 00:30:04.899 "r_mbytes_per_sec": 0, 00:30:04.899 "w_mbytes_per_sec": 0 00:30:04.899 }, 00:30:04.899 "claimed": false, 00:30:04.899 "zoned": false, 00:30:04.899 "supported_io_types": { 
00:30:04.899 "read": true, 00:30:04.899 "write": true, 00:30:04.899 "unmap": false, 00:30:04.899 "flush": false, 00:30:04.899 "reset": false, 00:30:04.899 "nvme_admin": false, 00:30:04.899 "nvme_io": false, 00:30:04.899 "nvme_io_md": false, 00:30:04.899 "write_zeroes": true, 00:30:04.899 "zcopy": false, 00:30:04.899 "get_zone_info": false, 00:30:04.899 "zone_management": false, 00:30:04.899 "zone_append": false, 00:30:04.899 "compare": false, 00:30:04.899 "compare_and_write": false, 00:30:04.899 "abort": false, 00:30:04.899 "seek_hole": false, 00:30:04.899 "seek_data": false, 00:30:04.899 "copy": false, 00:30:04.899 "nvme_iov_md": false 00:30:04.899 }, 00:30:04.899 "driver_specific": { 00:30:04.899 "compress": { 00:30:04.899 "name": "COMP_lvs0/lv0", 00:30:04.899 "base_bdev_name": "98bd8f0a-6162-4dc5-b888-ac581118f709", 00:30:04.899 "pm_path": "/tmp/pmem/b8d19306-c0de-4a8e-81cd-c8fb028b62d5" 00:30:04.899 } 00:30:04.899 } 00:30:04.899 } 00:30:04.899 ] 00:30:04.899 13:29:45 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:30:04.899 13:29:45 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:05.158 Running I/O for 3 seconds... 
00:30:08.487 00:30:08.487 Latency(us) 00:30:08.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:08.487 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:30:08.487 Verification LBA range: start 0x0 length 0x3100 00:30:08.487 COMP_lvs0/lv0 : 3.01 3498.86 13.67 0.00 0.00 9091.29 59.39 14470.35 00:30:08.487 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:30:08.487 Verification LBA range: start 0x3100 length 0x3100 00:30:08.487 COMP_lvs0/lv0 : 3.01 3498.10 13.66 0.00 0.00 9098.83 55.30 14050.92 00:30:08.487 =================================================================================================================== 00:30:08.487 Total : 6996.96 27.33 0.00 0.00 9095.06 55.30 14470.35 00:30:08.487 0 00:30:08.487 13:29:48 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:30:08.487 13:29:48 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:30:08.487 13:29:48 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:30:08.747 13:29:49 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:30:08.747 13:29:49 compress_isal -- compress/compress.sh@78 -- # killprocess 862609 00:30:08.747 13:29:49 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 862609 ']' 00:30:08.747 13:29:49 compress_isal -- common/autotest_common.sh@954 -- # kill -0 862609 00:30:08.747 13:29:49 compress_isal -- common/autotest_common.sh@955 -- # uname 00:30:08.747 13:29:49 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:08.747 13:29:49 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 862609 00:30:08.747 13:29:49 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:08.747 13:29:49 compress_isal -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:08.747 13:29:49 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 862609' 00:30:08.747 killing process with pid 862609 00:30:08.747 13:29:49 compress_isal -- common/autotest_common.sh@969 -- # kill 862609 00:30:08.747 Received shutdown signal, test time was about 3.000000 seconds 00:30:08.747 00:30:08.747 Latency(us) 00:30:08.747 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:08.747 =================================================================================================================== 00:30:08.747 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:08.747 13:29:49 compress_isal -- common/autotest_common.sh@974 -- # wait 862609 00:30:11.283 13:29:51 compress_isal -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:30:11.283 13:29:51 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:30:11.283 13:29:51 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=864916 00:30:11.283 13:29:51 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:11.283 13:29:51 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:30:11.283 13:29:51 compress_isal -- compress/compress.sh@73 -- # waitforlisten 864916 00:30:11.283 13:29:51 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 864916 ']' 00:30:11.283 13:29:51 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.283 13:29:51 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:11.283 13:29:51 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:11.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:11.283 13:29:51 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:11.283 13:29:51 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:30:11.283 [2024-07-26 13:29:51.507322] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:30:11.283 [2024-07-26 13:29:51.507384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid864916 ] 00:30:11.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.283 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:11.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.283 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:11.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.283 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:11.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.283 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:11.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.283 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:11.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.283 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:11.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.283 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:11.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.283 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:11.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.283 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:11.283 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:30:11.283 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:11.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.283 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:11.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.283 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:11.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.283 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:11.283 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.283 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.284 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.284 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.284 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.284 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.284 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.284 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.284 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.284 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.284 EAL: Requested device 0000:3f:01.6 cannot be used 00:30:11.284 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:30:11.284 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.284 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.284 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.284 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.284 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.284 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.284 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.284 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:11.284 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:11.284 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:11.284 [2024-07-26 13:29:51.627715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:11.284 [2024-07-26 13:29:51.710641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:11.284 [2024-07-26 13:29:51.710655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:12.221 13:29:52 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:12.221 13:29:52 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:30:12.221 13:29:52 compress_isal -- compress/compress.sh@74 -- # create_vols 512 00:30:12.221 13:29:52 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:12.221 13:29:52 compress_isal -- compress/compress.sh@34 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:15.508 13:29:55 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:30:15.508 13:29:55 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:30:15.508 13:29:55 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:15.508 13:29:55 compress_isal -- common/autotest_common.sh@901 -- # local i 00:30:15.508 13:29:55 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:15.508 13:29:55 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:15.508 13:29:55 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:15.508 13:29:55 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:30:15.508 [ 00:30:15.508 { 00:30:15.508 "name": "Nvme0n1", 00:30:15.508 "aliases": [ 00:30:15.508 "d7cf15ac-95f6-47fc-846c-2394262eb605" 00:30:15.508 ], 00:30:15.508 "product_name": "NVMe disk", 00:30:15.508 "block_size": 512, 00:30:15.508 "num_blocks": 3907029168, 00:30:15.508 "uuid": "d7cf15ac-95f6-47fc-846c-2394262eb605", 00:30:15.508 "assigned_rate_limits": { 00:30:15.508 "rw_ios_per_sec": 0, 00:30:15.508 "rw_mbytes_per_sec": 0, 00:30:15.508 "r_mbytes_per_sec": 0, 00:30:15.508 "w_mbytes_per_sec": 0 00:30:15.508 }, 00:30:15.508 "claimed": false, 00:30:15.508 "zoned": false, 00:30:15.508 "supported_io_types": { 00:30:15.508 "read": true, 00:30:15.508 "write": true, 00:30:15.508 "unmap": true, 00:30:15.508 "flush": true, 00:30:15.508 "reset": true, 00:30:15.508 "nvme_admin": true, 00:30:15.508 "nvme_io": true, 00:30:15.508 "nvme_io_md": false, 00:30:15.508 "write_zeroes": true, 00:30:15.508 "zcopy": false, 00:30:15.508 "get_zone_info": false, 00:30:15.508 "zone_management": false, 00:30:15.508 "zone_append": false, 
00:30:15.508 "compare": false, 00:30:15.508 "compare_and_write": false, 00:30:15.508 "abort": true, 00:30:15.508 "seek_hole": false, 00:30:15.508 "seek_data": false, 00:30:15.508 "copy": false, 00:30:15.508 "nvme_iov_md": false 00:30:15.508 }, 00:30:15.508 "driver_specific": { 00:30:15.508 "nvme": [ 00:30:15.508 { 00:30:15.508 "pci_address": "0000:d8:00.0", 00:30:15.508 "trid": { 00:30:15.508 "trtype": "PCIe", 00:30:15.508 "traddr": "0000:d8:00.0" 00:30:15.508 }, 00:30:15.508 "ctrlr_data": { 00:30:15.508 "cntlid": 0, 00:30:15.508 "vendor_id": "0x8086", 00:30:15.508 "model_number": "INTEL SSDPE2KX020T8", 00:30:15.508 "serial_number": "BTLJ125505KA2P0BGN", 00:30:15.508 "firmware_revision": "VDV10170", 00:30:15.508 "oacs": { 00:30:15.508 "security": 0, 00:30:15.508 "format": 1, 00:30:15.508 "firmware": 1, 00:30:15.508 "ns_manage": 1 00:30:15.508 }, 00:30:15.508 "multi_ctrlr": false, 00:30:15.508 "ana_reporting": false 00:30:15.508 }, 00:30:15.508 "vs": { 00:30:15.508 "nvme_version": "1.2" 00:30:15.508 }, 00:30:15.508 "ns_data": { 00:30:15.508 "id": 1, 00:30:15.508 "can_share": false 00:30:15.508 } 00:30:15.508 } 00:30:15.508 ], 00:30:15.508 "mp_policy": "active_passive" 00:30:15.508 } 00:30:15.508 } 00:30:15.508 ] 00:30:15.508 13:29:55 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:30:15.508 13:29:55 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:30:16.886 ef76e28a-6b75-42e4-8b38-9cf92307c1df 00:30:16.886 13:29:57 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:30:17.144 d36aaf32-2d8a-463e-8e5a-a1f41e3347c0 00:30:17.144 13:29:57 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:30:17.144 13:29:57 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:30:17.144 13:29:57 compress_isal -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:17.144 13:29:57 compress_isal -- common/autotest_common.sh@901 -- # local i 00:30:17.144 13:29:57 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:17.144 13:29:57 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:17.144 13:29:57 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:17.403 13:29:57 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:30:17.403 [ 00:30:17.403 { 00:30:17.403 "name": "d36aaf32-2d8a-463e-8e5a-a1f41e3347c0", 00:30:17.403 "aliases": [ 00:30:17.403 "lvs0/lv0" 00:30:17.403 ], 00:30:17.403 "product_name": "Logical Volume", 00:30:17.403 "block_size": 512, 00:30:17.403 "num_blocks": 204800, 00:30:17.403 "uuid": "d36aaf32-2d8a-463e-8e5a-a1f41e3347c0", 00:30:17.403 "assigned_rate_limits": { 00:30:17.403 "rw_ios_per_sec": 0, 00:30:17.403 "rw_mbytes_per_sec": 0, 00:30:17.403 "r_mbytes_per_sec": 0, 00:30:17.403 "w_mbytes_per_sec": 0 00:30:17.403 }, 00:30:17.403 "claimed": false, 00:30:17.403 "zoned": false, 00:30:17.403 "supported_io_types": { 00:30:17.403 "read": true, 00:30:17.403 "write": true, 00:30:17.403 "unmap": true, 00:30:17.403 "flush": false, 00:30:17.403 "reset": true, 00:30:17.403 "nvme_admin": false, 00:30:17.403 "nvme_io": false, 00:30:17.403 "nvme_io_md": false, 00:30:17.403 "write_zeroes": true, 00:30:17.403 "zcopy": false, 00:30:17.403 "get_zone_info": false, 00:30:17.403 "zone_management": false, 00:30:17.403 "zone_append": false, 00:30:17.403 "compare": false, 00:30:17.403 "compare_and_write": false, 00:30:17.403 "abort": false, 00:30:17.403 "seek_hole": true, 00:30:17.403 "seek_data": true, 00:30:17.403 "copy": false, 00:30:17.403 "nvme_iov_md": false 00:30:17.403 }, 00:30:17.403 "driver_specific": { 00:30:17.403 "lvol": { 00:30:17.403 
"lvol_store_uuid": "ef76e28a-6b75-42e4-8b38-9cf92307c1df", 00:30:17.403 "base_bdev": "Nvme0n1", 00:30:17.403 "thin_provision": true, 00:30:17.403 "num_allocated_clusters": 0, 00:30:17.403 "snapshot": false, 00:30:17.403 "clone": false, 00:30:17.403 "esnap_clone": false 00:30:17.403 } 00:30:17.403 } 00:30:17.403 } 00:30:17.403 ] 00:30:17.403 13:29:57 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:30:17.403 13:29:57 compress_isal -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:30:17.403 13:29:57 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:30:17.662 [2024-07-26 13:29:58.052815] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:30:17.662 COMP_lvs0/lv0 00:30:17.662 13:29:58 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:30:17.662 13:29:58 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:30:17.662 13:29:58 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:17.662 13:29:58 compress_isal -- common/autotest_common.sh@901 -- # local i 00:30:17.662 13:29:58 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:17.662 13:29:58 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:17.662 13:29:58 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:17.921 13:29:58 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:30:18.180 [ 00:30:18.180 { 00:30:18.180 "name": "COMP_lvs0/lv0", 00:30:18.180 "aliases": [ 00:30:18.180 "a0fa7f4e-4e15-55bd-9976-80b5aa600188" 00:30:18.180 ], 00:30:18.180 "product_name": "compress", 00:30:18.180 "block_size": 512, 00:30:18.180 
"num_blocks": 200704, 00:30:18.180 "uuid": "a0fa7f4e-4e15-55bd-9976-80b5aa600188", 00:30:18.180 "assigned_rate_limits": { 00:30:18.180 "rw_ios_per_sec": 0, 00:30:18.180 "rw_mbytes_per_sec": 0, 00:30:18.180 "r_mbytes_per_sec": 0, 00:30:18.180 "w_mbytes_per_sec": 0 00:30:18.180 }, 00:30:18.180 "claimed": false, 00:30:18.180 "zoned": false, 00:30:18.180 "supported_io_types": { 00:30:18.180 "read": true, 00:30:18.180 "write": true, 00:30:18.180 "unmap": false, 00:30:18.180 "flush": false, 00:30:18.180 "reset": false, 00:30:18.180 "nvme_admin": false, 00:30:18.180 "nvme_io": false, 00:30:18.180 "nvme_io_md": false, 00:30:18.180 "write_zeroes": true, 00:30:18.180 "zcopy": false, 00:30:18.180 "get_zone_info": false, 00:30:18.180 "zone_management": false, 00:30:18.180 "zone_append": false, 00:30:18.180 "compare": false, 00:30:18.180 "compare_and_write": false, 00:30:18.180 "abort": false, 00:30:18.180 "seek_hole": false, 00:30:18.180 "seek_data": false, 00:30:18.180 "copy": false, 00:30:18.180 "nvme_iov_md": false 00:30:18.180 }, 00:30:18.180 "driver_specific": { 00:30:18.180 "compress": { 00:30:18.180 "name": "COMP_lvs0/lv0", 00:30:18.180 "base_bdev_name": "d36aaf32-2d8a-463e-8e5a-a1f41e3347c0", 00:30:18.180 "pm_path": "/tmp/pmem/cd703835-2e0e-40d8-811c-95da38c3fe38" 00:30:18.180 } 00:30:18.180 } 00:30:18.180 } 00:30:18.180 ] 00:30:18.180 13:29:58 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:30:18.180 13:29:58 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:18.180 Running I/O for 3 seconds... 
00:30:21.468 00:30:21.468 Latency(us) 00:30:21.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:21.468 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:30:21.468 Verification LBA range: start 0x0 length 0x3100 00:30:21.468 COMP_lvs0/lv0 : 3.01 3470.13 13.56 0.00 0.00 9157.09 59.39 14575.21 00:30:21.468 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:30:21.468 Verification LBA range: start 0x3100 length 0x3100 00:30:21.468 COMP_lvs0/lv0 : 3.01 3498.88 13.67 0.00 0.00 9086.59 54.89 14784.92 00:30:21.468 =================================================================================================================== 00:30:21.468 Total : 6969.00 27.22 0.00 0.00 9121.68 54.89 14784.92 00:30:21.468 0 00:30:21.468 13:30:01 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:30:21.468 13:30:01 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:30:21.468 13:30:01 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:30:21.727 13:30:02 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:30:21.727 13:30:02 compress_isal -- compress/compress.sh@78 -- # killprocess 864916 00:30:21.727 13:30:02 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 864916 ']' 00:30:21.727 13:30:02 compress_isal -- common/autotest_common.sh@954 -- # kill -0 864916 00:30:21.727 13:30:02 compress_isal -- common/autotest_common.sh@955 -- # uname 00:30:21.727 13:30:02 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:21.727 13:30:02 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 864916 00:30:21.727 13:30:02 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:21.727 13:30:02 compress_isal -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:21.727 13:30:02 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 864916' 00:30:21.727 killing process with pid 864916 00:30:21.727 13:30:02 compress_isal -- common/autotest_common.sh@969 -- # kill 864916 00:30:21.727 Received shutdown signal, test time was about 3.000000 seconds 00:30:21.727 00:30:21.727 Latency(us) 00:30:21.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:21.727 =================================================================================================================== 00:30:21.727 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:21.727 13:30:02 compress_isal -- common/autotest_common.sh@974 -- # wait 864916 00:30:24.259 13:30:04 compress_isal -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:30:24.259 13:30:04 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:30:24.259 13:30:04 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=867464 00:30:24.259 13:30:04 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:24.259 13:30:04 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:30:24.259 13:30:04 compress_isal -- compress/compress.sh@73 -- # waitforlisten 867464 00:30:24.259 13:30:04 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 867464 ']' 00:30:24.259 13:30:04 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:24.259 13:30:04 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:24.259 13:30:04 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:24.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:24.259 13:30:04 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:24.259 13:30:04 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:30:24.259 [2024-07-26 13:30:04.683948] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:30:24.259 [2024-07-26 13:30:04.684012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid867464 ] 00:30:24.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:24.259 EAL: Requested device 0000:3d:01.0 cannot be used [last two messages repeated for each requested QAT device from 0000:3d:01.0 through 0000:3f:02.7] 00:30:24.520 [2024-07-26 13:30:04.804432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:24.520 [2024-07-26 13:30:04.891364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:24.520 [2024-07-26 13:30:04.891371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:25.088 13:30:05 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:25.088 13:30:05 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:30:25.088 13:30:05 compress_isal -- compress/compress.sh@74 -- # create_vols 4096 00:30:25.088 13:30:05 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:25.088 13:30:05 compress_isal -- compress/compress.sh@34 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:28.377 13:30:08 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:30:28.377 13:30:08 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:30:28.377 13:30:08 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:28.377 13:30:08 compress_isal -- common/autotest_common.sh@901 -- # local i 00:30:28.377 13:30:08 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:28.377 13:30:08 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:28.377 13:30:08 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:28.377 13:30:08 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:30:28.680 [ 00:30:28.680 { 00:30:28.680 "name": "Nvme0n1", 00:30:28.680 "aliases": [ 00:30:28.680 "2457ea69-178a-45e9-b0e9-1967552c5e4e" 00:30:28.680 ], 00:30:28.680 "product_name": "NVMe disk", 00:30:28.680 "block_size": 512, 00:30:28.680 "num_blocks": 3907029168, 00:30:28.680 "uuid": "2457ea69-178a-45e9-b0e9-1967552c5e4e", 00:30:28.680 "assigned_rate_limits": { 00:30:28.680 "rw_ios_per_sec": 0, 00:30:28.680 "rw_mbytes_per_sec": 0, 00:30:28.680 "r_mbytes_per_sec": 0, 00:30:28.680 "w_mbytes_per_sec": 0 00:30:28.680 }, 00:30:28.680 "claimed": false, 00:30:28.680 "zoned": false, 00:30:28.680 "supported_io_types": { 00:30:28.680 "read": true, 00:30:28.680 "write": true, 00:30:28.680 "unmap": true, 00:30:28.680 "flush": true, 00:30:28.680 "reset": true, 00:30:28.680 "nvme_admin": true, 00:30:28.680 "nvme_io": true, 00:30:28.680 "nvme_io_md": false, 00:30:28.680 "write_zeroes": true, 00:30:28.680 "zcopy": false, 00:30:28.680 "get_zone_info": false, 00:30:28.680 "zone_management": false, 00:30:28.680 "zone_append": false, 00:30:28.680 "compare": 
false, 00:30:28.680 "compare_and_write": false, 00:30:28.680 "abort": true, 00:30:28.680 "seek_hole": false, 00:30:28.680 "seek_data": false, 00:30:28.680 "copy": false, 00:30:28.680 "nvme_iov_md": false 00:30:28.680 }, 00:30:28.680 "driver_specific": { 00:30:28.680 "nvme": [ 00:30:28.680 { 00:30:28.680 "pci_address": "0000:d8:00.0", 00:30:28.680 "trid": { 00:30:28.680 "trtype": "PCIe", 00:30:28.680 "traddr": "0000:d8:00.0" 00:30:28.680 }, 00:30:28.680 "ctrlr_data": { 00:30:28.680 "cntlid": 0, 00:30:28.680 "vendor_id": "0x8086", 00:30:28.680 "model_number": "INTEL SSDPE2KX020T8", 00:30:28.680 "serial_number": "BTLJ125505KA2P0BGN", 00:30:28.680 "firmware_revision": "VDV10170", 00:30:28.680 "oacs": { 00:30:28.680 "security": 0, 00:30:28.680 "format": 1, 00:30:28.680 "firmware": 1, 00:30:28.680 "ns_manage": 1 00:30:28.680 }, 00:30:28.680 "multi_ctrlr": false, 00:30:28.680 "ana_reporting": false 00:30:28.680 }, 00:30:28.680 "vs": { 00:30:28.680 "nvme_version": "1.2" 00:30:28.680 }, 00:30:28.680 "ns_data": { 00:30:28.680 "id": 1, 00:30:28.680 "can_share": false 00:30:28.680 } 00:30:28.680 } 00:30:28.680 ], 00:30:28.680 "mp_policy": "active_passive" 00:30:28.680 } 00:30:28.680 } 00:30:28.680 ] 00:30:28.680 13:30:09 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:30:28.680 13:30:09 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:30:30.064 aa7374ed-51bb-4de8-b70e-e481ac7b5cc8 00:30:30.064 13:30:10 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:30:30.064 4fba4f1f-66dc-4352-8bc2-dca56e86b318 00:30:30.064 13:30:10 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:30:30.064 13:30:10 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:30:30.064 13:30:10 compress_isal -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:30.064 13:30:10 compress_isal -- common/autotest_common.sh@901 -- # local i 00:30:30.064 13:30:10 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:30.064 13:30:10 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:30.064 13:30:10 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:30.323 13:30:10 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:30:30.583 [ 00:30:30.583 { 00:30:30.583 "name": "4fba4f1f-66dc-4352-8bc2-dca56e86b318", 00:30:30.583 "aliases": [ 00:30:30.583 "lvs0/lv0" 00:30:30.583 ], 00:30:30.583 "product_name": "Logical Volume", 00:30:30.583 "block_size": 512, 00:30:30.583 "num_blocks": 204800, 00:30:30.583 "uuid": "4fba4f1f-66dc-4352-8bc2-dca56e86b318", 00:30:30.583 "assigned_rate_limits": { 00:30:30.583 "rw_ios_per_sec": 0, 00:30:30.583 "rw_mbytes_per_sec": 0, 00:30:30.583 "r_mbytes_per_sec": 0, 00:30:30.583 "w_mbytes_per_sec": 0 00:30:30.583 }, 00:30:30.583 "claimed": false, 00:30:30.583 "zoned": false, 00:30:30.583 "supported_io_types": { 00:30:30.583 "read": true, 00:30:30.583 "write": true, 00:30:30.583 "unmap": true, 00:30:30.583 "flush": false, 00:30:30.583 "reset": true, 00:30:30.583 "nvme_admin": false, 00:30:30.583 "nvme_io": false, 00:30:30.583 "nvme_io_md": false, 00:30:30.583 "write_zeroes": true, 00:30:30.583 "zcopy": false, 00:30:30.583 "get_zone_info": false, 00:30:30.583 "zone_management": false, 00:30:30.583 "zone_append": false, 00:30:30.583 "compare": false, 00:30:30.583 "compare_and_write": false, 00:30:30.583 "abort": false, 00:30:30.583 "seek_hole": true, 00:30:30.583 "seek_data": true, 00:30:30.583 "copy": false, 00:30:30.583 "nvme_iov_md": false 00:30:30.583 }, 00:30:30.583 "driver_specific": { 00:30:30.583 "lvol": { 00:30:30.583 
"lvol_store_uuid": "aa7374ed-51bb-4de8-b70e-e481ac7b5cc8", 00:30:30.583 "base_bdev": "Nvme0n1", 00:30:30.583 "thin_provision": true, 00:30:30.583 "num_allocated_clusters": 0, 00:30:30.583 "snapshot": false, 00:30:30.583 "clone": false, 00:30:30.583 "esnap_clone": false 00:30:30.583 } 00:30:30.583 } 00:30:30.583 } 00:30:30.583 ] 00:30:30.583 13:30:10 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:30:30.583 13:30:10 compress_isal -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:30:30.583 13:30:10 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:30:30.842 [2024-07-26 13:30:11.210193] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:30:30.842 COMP_lvs0/lv0 00:30:30.842 13:30:11 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:30:30.842 13:30:11 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:30:30.842 13:30:11 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:30.842 13:30:11 compress_isal -- common/autotest_common.sh@901 -- # local i 00:30:30.842 13:30:11 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:30.842 13:30:11 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:30.842 13:30:11 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:31.100 13:30:11 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:30:31.359 [ 00:30:31.359 { 00:30:31.359 "name": "COMP_lvs0/lv0", 00:30:31.359 "aliases": [ 00:30:31.359 "e35a2d11-6209-5da1-a581-853d070d28a8" 00:30:31.359 ], 00:30:31.359 "product_name": "compress", 00:30:31.359 "block_size": 4096, 00:30:31.359 
"num_blocks": 25088, 00:30:31.359 "uuid": "e35a2d11-6209-5da1-a581-853d070d28a8", 00:30:31.359 "assigned_rate_limits": { 00:30:31.359 "rw_ios_per_sec": 0, 00:30:31.359 "rw_mbytes_per_sec": 0, 00:30:31.359 "r_mbytes_per_sec": 0, 00:30:31.359 "w_mbytes_per_sec": 0 00:30:31.359 }, 00:30:31.359 "claimed": false, 00:30:31.359 "zoned": false, 00:30:31.359 "supported_io_types": { 00:30:31.359 "read": true, 00:30:31.359 "write": true, 00:30:31.359 "unmap": false, 00:30:31.359 "flush": false, 00:30:31.359 "reset": false, 00:30:31.359 "nvme_admin": false, 00:30:31.359 "nvme_io": false, 00:30:31.359 "nvme_io_md": false, 00:30:31.359 "write_zeroes": true, 00:30:31.359 "zcopy": false, 00:30:31.359 "get_zone_info": false, 00:30:31.359 "zone_management": false, 00:30:31.359 "zone_append": false, 00:30:31.359 "compare": false, 00:30:31.359 "compare_and_write": false, 00:30:31.359 "abort": false, 00:30:31.359 "seek_hole": false, 00:30:31.359 "seek_data": false, 00:30:31.359 "copy": false, 00:30:31.359 "nvme_iov_md": false 00:30:31.359 }, 00:30:31.359 "driver_specific": { 00:30:31.359 "compress": { 00:30:31.359 "name": "COMP_lvs0/lv0", 00:30:31.359 "base_bdev_name": "4fba4f1f-66dc-4352-8bc2-dca56e86b318", 00:30:31.360 "pm_path": "/tmp/pmem/8d475a9a-635c-401b-899f-effcf4fc3fd6" 00:30:31.360 } 00:30:31.360 } 00:30:31.360 } 00:30:31.360 ] 00:30:31.360 13:30:11 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:30:31.360 13:30:11 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:31.360 Running I/O for 3 seconds... 
00:30:34.647 00:30:34.647 Latency(us) 00:30:34.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:34.647 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:30:34.647 Verification LBA range: start 0x0 length 0x3100 00:30:34.647 COMP_lvs0/lv0 : 3.01 3471.64 13.56 0.00 0.00 9162.84 58.98 17091.79 00:30:34.647 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:30:34.647 Verification LBA range: start 0x3100 length 0x3100 00:30:34.647 COMP_lvs0/lv0 : 3.01 3511.35 13.72 0.00 0.00 9061.41 57.34 16462.64 00:30:34.647 =================================================================================================================== 00:30:34.647 Total : 6982.98 27.28 0.00 0.00 9111.80 57.34 17091.79 00:30:34.647 0 00:30:34.647 13:30:14 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:30:34.647 13:30:14 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:30:34.647 13:30:15 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:30:34.906 13:30:15 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:30:34.906 13:30:15 compress_isal -- compress/compress.sh@78 -- # killprocess 867464 00:30:34.906 13:30:15 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 867464 ']' 00:30:34.906 13:30:15 compress_isal -- common/autotest_common.sh@954 -- # kill -0 867464 00:30:34.906 13:30:15 compress_isal -- common/autotest_common.sh@955 -- # uname 00:30:34.906 13:30:15 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:34.906 13:30:15 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 867464 00:30:34.906 13:30:15 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:34.906 13:30:15 compress_isal -- 
common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:34.906 13:30:15 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 867464' 00:30:34.906 killing process with pid 867464 00:30:34.906 13:30:15 compress_isal -- common/autotest_common.sh@969 -- # kill 867464 00:30:34.906 Received shutdown signal, test time was about 3.000000 seconds 00:30:34.906 00:30:34.906 Latency(us) 00:30:34.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:34.906 =================================================================================================================== 00:30:34.906 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:34.906 13:30:15 compress_isal -- common/autotest_common.sh@974 -- # wait 867464 00:30:37.440 13:30:17 compress_isal -- compress/compress.sh@89 -- # run_bdevio 00:30:37.440 13:30:17 compress_isal -- compress/compress.sh@50 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:30:37.440 13:30:17 compress_isal -- compress/compress.sh@55 -- # bdevio_pid=869804 00:30:37.440 13:30:17 compress_isal -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:37.440 13:30:17 compress_isal -- compress/compress.sh@57 -- # waitforlisten 869804 00:30:37.440 13:30:17 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 869804 ']' 00:30:37.440 13:30:17 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:37.440 13:30:17 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:37.441 13:30:17 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:37.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:37.441 13:30:17 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:37.441 13:30:17 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:30:37.441 13:30:17 compress_isal -- compress/compress.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w 00:30:37.441 [2024-07-26 13:30:17.894881] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:30:37.441 [2024-07-26 13:30:17.895012] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid869804 ] 00:30:37.699 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.699 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3d:02.0 cannot be used 
00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3f:01.6 cannot be used 00:30:37.700 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:37.700 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:37.700 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:37.700 [2024-07-26 13:30:18.098993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:37.700 [2024-07-26 13:30:18.183517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:37.700 [2024-07-26 13:30:18.183611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:37.700 [2024-07-26 13:30:18.183615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:38.268 13:30:18 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:38.268 13:30:18 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:30:38.268 13:30:18 compress_isal -- compress/compress.sh@58 -- # create_vols 00:30:38.268 13:30:18 compress_isal -- compress/compress.sh@34 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:38.268 13:30:18 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:41.559 13:30:21 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:30:41.559 13:30:21 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:30:41.559 13:30:21 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:41.559 13:30:21 compress_isal -- common/autotest_common.sh@901 -- # local i 00:30:41.559 13:30:21 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:41.559 13:30:21 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:41.559 13:30:21 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:41.819 13:30:22 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:30:41.819 [ 00:30:41.819 { 00:30:41.819 "name": "Nvme0n1", 00:30:41.819 "aliases": [ 00:30:41.819 "bad68f12-2a01-4377-825d-31bf7bf6fce0" 00:30:41.819 ], 00:30:41.819 "product_name": "NVMe disk", 00:30:41.819 "block_size": 512, 00:30:41.819 "num_blocks": 3907029168, 00:30:41.819 "uuid": "bad68f12-2a01-4377-825d-31bf7bf6fce0", 00:30:41.819 "assigned_rate_limits": { 00:30:41.819 "rw_ios_per_sec": 0, 00:30:41.819 "rw_mbytes_per_sec": 0, 00:30:41.819 "r_mbytes_per_sec": 0, 00:30:41.819 "w_mbytes_per_sec": 0 00:30:41.819 }, 00:30:41.819 "claimed": false, 00:30:41.819 "zoned": false, 00:30:41.819 "supported_io_types": { 00:30:41.819 "read": true, 00:30:41.819 "write": true, 00:30:41.819 "unmap": true, 00:30:41.819 "flush": true, 00:30:41.819 "reset": true, 00:30:41.819 "nvme_admin": true, 00:30:41.819 "nvme_io": true, 00:30:41.819 "nvme_io_md": false, 00:30:41.819 "write_zeroes": true, 00:30:41.819 
"zcopy": false, 00:30:41.819 "get_zone_info": false, 00:30:41.819 "zone_management": false, 00:30:41.819 "zone_append": false, 00:30:41.819 "compare": false, 00:30:41.819 "compare_and_write": false, 00:30:41.819 "abort": true, 00:30:41.819 "seek_hole": false, 00:30:41.819 "seek_data": false, 00:30:41.819 "copy": false, 00:30:41.819 "nvme_iov_md": false 00:30:41.819 }, 00:30:41.819 "driver_specific": { 00:30:41.819 "nvme": [ 00:30:41.819 { 00:30:41.819 "pci_address": "0000:d8:00.0", 00:30:41.819 "trid": { 00:30:41.819 "trtype": "PCIe", 00:30:41.819 "traddr": "0000:d8:00.0" 00:30:41.819 }, 00:30:41.819 "ctrlr_data": { 00:30:41.819 "cntlid": 0, 00:30:41.819 "vendor_id": "0x8086", 00:30:41.819 "model_number": "INTEL SSDPE2KX020T8", 00:30:41.819 "serial_number": "BTLJ125505KA2P0BGN", 00:30:41.819 "firmware_revision": "VDV10170", 00:30:41.819 "oacs": { 00:30:41.819 "security": 0, 00:30:41.819 "format": 1, 00:30:41.819 "firmware": 1, 00:30:41.819 "ns_manage": 1 00:30:41.819 }, 00:30:41.819 "multi_ctrlr": false, 00:30:41.819 "ana_reporting": false 00:30:41.819 }, 00:30:41.819 "vs": { 00:30:41.819 "nvme_version": "1.2" 00:30:41.819 }, 00:30:41.819 "ns_data": { 00:30:41.819 "id": 1, 00:30:41.819 "can_share": false 00:30:41.819 } 00:30:41.819 } 00:30:41.819 ], 00:30:41.819 "mp_policy": "active_passive" 00:30:41.819 } 00:30:41.819 } 00:30:41.819 ] 00:30:41.819 13:30:22 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:30:41.819 13:30:22 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:30:43.199 189a3754-8f98-4840-93e0-0ba924841bcd 00:30:43.199 13:30:23 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:30:43.457 f8992267-d7ab-451a-a980-82afb64d9e8c 00:30:43.457 13:30:23 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:30:43.457 
13:30:23 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:30:43.457 13:30:23 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:43.457 13:30:23 compress_isal -- common/autotest_common.sh@901 -- # local i 00:30:43.457 13:30:23 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:43.457 13:30:23 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:43.457 13:30:23 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:43.714 13:30:24 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:30:43.973 [ 00:30:43.973 { 00:30:43.973 "name": "f8992267-d7ab-451a-a980-82afb64d9e8c", 00:30:43.973 "aliases": [ 00:30:43.973 "lvs0/lv0" 00:30:43.973 ], 00:30:43.973 "product_name": "Logical Volume", 00:30:43.973 "block_size": 512, 00:30:43.973 "num_blocks": 204800, 00:30:43.973 "uuid": "f8992267-d7ab-451a-a980-82afb64d9e8c", 00:30:43.973 "assigned_rate_limits": { 00:30:43.973 "rw_ios_per_sec": 0, 00:30:43.973 "rw_mbytes_per_sec": 0, 00:30:43.973 "r_mbytes_per_sec": 0, 00:30:43.973 "w_mbytes_per_sec": 0 00:30:43.973 }, 00:30:43.973 "claimed": false, 00:30:43.973 "zoned": false, 00:30:43.973 "supported_io_types": { 00:30:43.973 "read": true, 00:30:43.973 "write": true, 00:30:43.973 "unmap": true, 00:30:43.973 "flush": false, 00:30:43.973 "reset": true, 00:30:43.973 "nvme_admin": false, 00:30:43.973 "nvme_io": false, 00:30:43.973 "nvme_io_md": false, 00:30:43.973 "write_zeroes": true, 00:30:43.973 "zcopy": false, 00:30:43.973 "get_zone_info": false, 00:30:43.973 "zone_management": false, 00:30:43.973 "zone_append": false, 00:30:43.973 "compare": false, 00:30:43.973 "compare_and_write": false, 00:30:43.973 "abort": false, 00:30:43.973 "seek_hole": true, 00:30:43.973 "seek_data": true, 00:30:43.973 "copy": 
false, 00:30:43.973 "nvme_iov_md": false 00:30:43.973 }, 00:30:43.973 "driver_specific": { 00:30:43.973 "lvol": { 00:30:43.973 "lvol_store_uuid": "189a3754-8f98-4840-93e0-0ba924841bcd", 00:30:43.973 "base_bdev": "Nvme0n1", 00:30:43.973 "thin_provision": true, 00:30:43.973 "num_allocated_clusters": 0, 00:30:43.973 "snapshot": false, 00:30:43.973 "clone": false, 00:30:43.973 "esnap_clone": false 00:30:43.973 } 00:30:43.973 } 00:30:43.973 } 00:30:43.973 ] 00:30:43.973 13:30:24 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:30:43.973 13:30:24 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:30:43.973 13:30:24 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:30:44.232 [2024-07-26 13:30:24.525085] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:30:44.232 COMP_lvs0/lv0 00:30:44.232 13:30:24 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:30:44.232 13:30:24 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:30:44.232 13:30:24 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:44.232 13:30:24 compress_isal -- common/autotest_common.sh@901 -- # local i 00:30:44.232 13:30:24 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:44.232 13:30:24 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:44.232 13:30:24 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:44.232 13:30:24 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:30:44.491 [ 00:30:44.491 { 00:30:44.491 "name": "COMP_lvs0/lv0", 00:30:44.491 "aliases": [ 00:30:44.491 
"3576fa34-fbe3-5b03-938d-b3101edd9062" 00:30:44.491 ], 00:30:44.491 "product_name": "compress", 00:30:44.491 "block_size": 512, 00:30:44.491 "num_blocks": 200704, 00:30:44.491 "uuid": "3576fa34-fbe3-5b03-938d-b3101edd9062", 00:30:44.491 "assigned_rate_limits": { 00:30:44.491 "rw_ios_per_sec": 0, 00:30:44.491 "rw_mbytes_per_sec": 0, 00:30:44.491 "r_mbytes_per_sec": 0, 00:30:44.491 "w_mbytes_per_sec": 0 00:30:44.491 }, 00:30:44.491 "claimed": false, 00:30:44.491 "zoned": false, 00:30:44.491 "supported_io_types": { 00:30:44.491 "read": true, 00:30:44.491 "write": true, 00:30:44.491 "unmap": false, 00:30:44.491 "flush": false, 00:30:44.491 "reset": false, 00:30:44.491 "nvme_admin": false, 00:30:44.491 "nvme_io": false, 00:30:44.491 "nvme_io_md": false, 00:30:44.491 "write_zeroes": true, 00:30:44.491 "zcopy": false, 00:30:44.491 "get_zone_info": false, 00:30:44.491 "zone_management": false, 00:30:44.491 "zone_append": false, 00:30:44.491 "compare": false, 00:30:44.491 "compare_and_write": false, 00:30:44.491 "abort": false, 00:30:44.491 "seek_hole": false, 00:30:44.491 "seek_data": false, 00:30:44.491 "copy": false, 00:30:44.491 "nvme_iov_md": false 00:30:44.491 }, 00:30:44.491 "driver_specific": { 00:30:44.491 "compress": { 00:30:44.491 "name": "COMP_lvs0/lv0", 00:30:44.491 "base_bdev_name": "f8992267-d7ab-451a-a980-82afb64d9e8c", 00:30:44.491 "pm_path": "/tmp/pmem/d93db305-8ca4-4661-a845-ebd88c21340f" 00:30:44.491 } 00:30:44.491 } 00:30:44.491 } 00:30:44.491 ] 00:30:44.491 13:30:24 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:30:44.491 13:30:24 compress_isal -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:30:44.750 I/O targets: 00:30:44.750 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:30:44.750 00:30:44.750 00:30:44.750 CUnit - A unit testing framework for C - Version 2.1-3 00:30:44.750 http://cunit.sourceforge.net/ 00:30:44.750 00:30:44.750 00:30:44.750 Suite: 
bdevio tests on: COMP_lvs0/lv0 00:30:44.750 Test: blockdev write read block ...passed 00:30:44.750 Test: blockdev write zeroes read block ...passed 00:30:44.750 Test: blockdev write zeroes read no split ...passed 00:30:44.750 Test: blockdev write zeroes read split ...passed 00:30:44.750 Test: blockdev write zeroes read split partial ...passed 00:30:44.750 Test: blockdev reset ...[2024-07-26 13:30:25.127185] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:30:44.750 passed 00:30:44.750 Test: blockdev write read 8 blocks ...passed 00:30:44.750 Test: blockdev write read size > 128k ...passed 00:30:44.750 Test: blockdev write read invalid size ...passed 00:30:44.750 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:44.750 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:44.750 Test: blockdev write read max offset ...passed 00:30:44.750 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:44.750 Test: blockdev writev readv 8 blocks ...passed 00:30:44.750 Test: blockdev writev readv 30 x 1block ...passed 00:30:44.750 Test: blockdev writev readv block ...passed 00:30:44.750 Test: blockdev writev readv size > 128k ...passed 00:30:44.750 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:44.750 Test: blockdev comparev and writev ...passed 00:30:44.750 Test: blockdev nvme passthru rw ...passed 00:30:44.750 Test: blockdev nvme passthru vendor specific ...passed 00:30:44.750 Test: blockdev nvme admin passthru ...passed 00:30:44.750 Test: blockdev copy ...passed 00:30:44.750 00:30:44.750 Run Summary: Type Total Ran Passed Failed Inactive 00:30:44.750 suites 1 1 n/a 0 0 00:30:44.750 tests 23 23 23 0 0 00:30:44.750 asserts 130 130 130 0 n/a 00:30:44.750 00:30:44.750 Elapsed time = 0.190 seconds 00:30:44.750 0 00:30:44.750 13:30:25 compress_isal -- compress/compress.sh@60 -- # destroy_vols 00:30:44.750 13:30:25 compress_isal -- 
compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:30:45.318 13:30:25 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:30:45.576 13:30:25 compress_isal -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:30:45.576 13:30:25 compress_isal -- compress/compress.sh@62 -- # killprocess 869804 00:30:45.576 13:30:25 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 869804 ']' 00:30:45.576 13:30:25 compress_isal -- common/autotest_common.sh@954 -- # kill -0 869804 00:30:45.576 13:30:25 compress_isal -- common/autotest_common.sh@955 -- # uname 00:30:45.576 13:30:25 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:45.576 13:30:25 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 869804 00:30:45.576 13:30:25 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:45.576 13:30:25 compress_isal -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:45.576 13:30:25 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 869804' 00:30:45.576 killing process with pid 869804 00:30:45.576 13:30:25 compress_isal -- common/autotest_common.sh@969 -- # kill 869804 00:30:45.576 13:30:25 compress_isal -- common/autotest_common.sh@974 -- # wait 869804 00:30:48.158 13:30:28 compress_isal -- compress/compress.sh@91 -- # '[' 0 -eq 1 ']' 00:30:48.158 13:30:28 compress_isal -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:30:48.158 00:30:48.158 real 0m50.616s 00:30:48.158 user 1m56.134s 00:30:48.158 sys 0m4.161s 00:30:48.158 13:30:28 compress_isal -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:48.158 13:30:28 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:30:48.158 ************************************ 00:30:48.158 END TEST compress_isal 00:30:48.158 
************************************ 00:30:48.158 13:30:28 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:30:48.158 13:30:28 -- spdk/autotest.sh@360 -- # '[' 1 -eq 1 ']' 00:30:48.158 13:30:28 -- spdk/autotest.sh@361 -- # run_test blockdev_crypto_aesni /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:30:48.158 13:30:28 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:48.158 13:30:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:48.158 13:30:28 -- common/autotest_common.sh@10 -- # set +x 00:30:48.158 ************************************ 00:30:48.158 START TEST blockdev_crypto_aesni 00:30:48.158 ************************************ 00:30:48.158 13:30:28 blockdev_crypto_aesni -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:30:48.158 * Looking for test storage... 00:30:48.158 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/nbd_common.sh@6 -- # set -e 00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 
00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/blockdev.sh@20 -- # : 00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/blockdev.sh@673 -- # uname -s 00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/blockdev.sh@681 -- # test_type=crypto_aesni 00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/blockdev.sh@682 -- # crypto_device= 00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/blockdev.sh@683 -- # dek= 00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/blockdev.sh@684 -- # env_ctx= 00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/blockdev.sh@689 -- # [[ crypto_aesni == bdev ]] 00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/blockdev.sh@689 -- # [[ crypto_aesni == crypto_* ]] 00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=871694 00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:30:48.158 13:30:28 blockdev_crypto_aesni -- bdev/blockdev.sh@49 -- # waitforlisten 
871694 00:30:48.158 13:30:28 blockdev_crypto_aesni -- common/autotest_common.sh@831 -- # '[' -z 871694 ']' 00:30:48.158 13:30:28 blockdev_crypto_aesni -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:48.158 13:30:28 blockdev_crypto_aesni -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:48.158 13:30:28 blockdev_crypto_aesni -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:48.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:48.158 13:30:28 blockdev_crypto_aesni -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:48.158 13:30:28 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:30:48.447 [2024-07-26 13:30:28.692024] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:30:48.447 [2024-07-26 13:30:28.692087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid871694 ] 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:30:48.447 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: 
Requested device 0000:3f:01.3 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3f:01.6 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:48.447 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:48.447 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:48.447 [2024-07-26 13:30:28.826198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.447 [2024-07-26 13:30:28.909769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.384 13:30:29 blockdev_crypto_aesni -- 
common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:49.384 13:30:29 blockdev_crypto_aesni -- common/autotest_common.sh@864 -- # return 0 00:30:49.384 13:30:29 blockdev_crypto_aesni -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:30:49.384 13:30:29 blockdev_crypto_aesni -- bdev/blockdev.sh@704 -- # setup_crypto_aesni_conf 00:30:49.384 13:30:29 blockdev_crypto_aesni -- bdev/blockdev.sh@145 -- # rpc_cmd 00:30:49.384 13:30:29 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.384 13:30:29 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:30:49.384 [2024-07-26 13:30:29.595900] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:30:49.384 [2024-07-26 13:30:29.603932] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:30:49.384 [2024-07-26 13:30:29.611949] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:30:49.384 [2024-07-26 13:30:29.679313] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:30:51.922 true 00:30:51.922 true 00:30:51.922 true 00:30:51.922 true 00:30:51.922 Malloc0 00:30:51.922 Malloc1 00:30:51.922 Malloc2 00:30:51.922 Malloc3 00:30:51.922 [2024-07-26 13:30:32.007166] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:30:51.922 crypto_ram 00:30:51.922 [2024-07-26 13:30:32.015187] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:30:51.922 crypto_ram2 00:30:51.922 [2024-07-26 13:30:32.023205] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:30:51.922 crypto_ram3 00:30:51.922 [2024-07-26 13:30:32.031226] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:30:51.922 crypto_ram4 00:30:51.922 13:30:32 
blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.922 13:30:32 blockdev_crypto_aesni -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:30:51.922 13:30:32 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.922 13:30:32 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:30:51.922 13:30:32 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.922 13:30:32 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # cat 00:30:51.922 13:30:32 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:30:51.922 13:30:32 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.922 13:30:32 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:30:51.922 13:30:32 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.922 13:30:32 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:30:51.922 13:30:32 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.922 13:30:32 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:30:51.922 13:30:32 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.922 13:30:32 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:30:51.922 13:30:32 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.922 13:30:32 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:30:51.922 13:30:32 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.922 13:30:32 blockdev_crypto_aesni -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:30:51.922 13:30:32 blockdev_crypto_aesni -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:30:51.922 13:30:32 blockdev_crypto_aesni -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == 
false)' 00:30:51.922 13:30:32 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.922 13:30:32 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:30:51.922 13:30:32 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.922 13:30:32 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:30:51.922 13:30:32 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "591a6185-10b6-5df2-9399-f0b159f22e75"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "591a6185-10b6-5df2-9399-f0b159f22e75",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "5927bd99-75c6-5f26-b740-583792bae2f9"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "5927bd99-75c6-5f26-b740-583792bae2f9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "d51e6e74-a821-53a7-8cea-73408fb34d0f"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "d51e6e74-a821-53a7-8cea-73408fb34d0f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "4a65d890-317d-5b23-9ae9-54f6eb51b5c4"' ' ],' ' "product_name": "crypto",' ' "block_size": 
4096,' ' "num_blocks": 8192,' ' "uuid": "4a65d890-317d-5b23-9ae9-54f6eb51b5c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:30:51.922 13:30:32 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # jq -r .name 00:30:51.922 13:30:32 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:30:51.922 13:30:32 blockdev_crypto_aesni -- bdev/blockdev.sh@751 -- # hello_world_bdev=crypto_ram 00:30:51.922 13:30:32 blockdev_crypto_aesni -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:30:51.922 13:30:32 blockdev_crypto_aesni -- bdev/blockdev.sh@753 -- # killprocess 871694 00:30:51.922 13:30:32 blockdev_crypto_aesni -- common/autotest_common.sh@950 -- # '[' -z 871694 ']' 00:30:51.922 13:30:32 blockdev_crypto_aesni -- common/autotest_common.sh@954 -- # kill -0 871694 00:30:51.922 13:30:32 blockdev_crypto_aesni -- common/autotest_common.sh@955 -- # uname 00:30:51.922 13:30:32 blockdev_crypto_aesni -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:51.922 13:30:32 blockdev_crypto_aesni -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 871694 00:30:51.922 13:30:32 
blockdev_crypto_aesni -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:51.922 13:30:32 blockdev_crypto_aesni -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:51.922 13:30:32 blockdev_crypto_aesni -- common/autotest_common.sh@968 -- # echo 'killing process with pid 871694' 00:30:51.922 killing process with pid 871694 00:30:51.922 13:30:32 blockdev_crypto_aesni -- common/autotest_common.sh@969 -- # kill 871694 00:30:51.922 13:30:32 blockdev_crypto_aesni -- common/autotest_common.sh@974 -- # wait 871694 00:30:52.492 13:30:32 blockdev_crypto_aesni -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:52.492 13:30:32 blockdev_crypto_aesni -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:30:52.492 13:30:32 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:30:52.492 13:30:32 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:52.492 13:30:32 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:30:52.492 ************************************ 00:30:52.492 START TEST bdev_hello_world 00:30:52.492 ************************************ 00:30:52.492 13:30:32 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:30:52.492 [2024-07-26 13:30:32.877365] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:30:52.492 [2024-07-26 13:30:32.877420] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid872286 ] 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3d:02.3 cannot be used 
00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3f:01.6 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:52.492 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:52.492 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:52.492 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:52.492 [2024-07-26 13:30:33.012165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.751 [2024-07-26 13:30:33.098459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.751 [2024-07-26 13:30:33.119683] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:30:52.751 [2024-07-26 13:30:33.127710] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:30:52.751 [2024-07-26 13:30:33.135730] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:30:52.751 [2024-07-26 13:30:33.245285] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:30:55.286 [2024-07-26 13:30:35.419759] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:30:55.286 [2024-07-26 13:30:35.419839] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:30:55.286 [2024-07-26 13:30:35.419853] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred 
pending base bdev arrival 00:30:55.286 [2024-07-26 13:30:35.427791] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:30:55.286 [2024-07-26 13:30:35.427809] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:30:55.286 [2024-07-26 13:30:35.427819] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:55.286 [2024-07-26 13:30:35.435799] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:30:55.286 [2024-07-26 13:30:35.435815] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:30:55.286 [2024-07-26 13:30:35.435826] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:55.286 [2024-07-26 13:30:35.443818] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:30:55.286 [2024-07-26 13:30:35.443838] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:30:55.286 [2024-07-26 13:30:35.443849] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:55.286 [2024-07-26 13:30:35.515462] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:30:55.286 [2024-07-26 13:30:35.515503] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:30:55.286 [2024-07-26 13:30:35.515519] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:30:55.286 [2024-07-26 13:30:35.516688] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:30:55.286 [2024-07-26 13:30:35.516753] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:30:55.286 [2024-07-26 13:30:35.516769] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:30:55.286 [2024-07-26 13:30:35.516809] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello 
World! 00:30:55.286 00:30:55.286 [2024-07-26 13:30:35.516827] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:30:55.546 00:30:55.546 real 0m3.019s 00:30:55.546 user 0m2.673s 00:30:55.546 sys 0m0.309s 00:30:55.546 13:30:35 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:55.546 13:30:35 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:30:55.546 ************************************ 00:30:55.546 END TEST bdev_hello_world 00:30:55.546 ************************************ 00:30:55.546 13:30:35 blockdev_crypto_aesni -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:30:55.546 13:30:35 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:55.546 13:30:35 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:55.546 13:30:35 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:30:55.546 ************************************ 00:30:55.546 START TEST bdev_bounds 00:30:55.546 ************************************ 00:30:55.546 13:30:35 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:30:55.546 13:30:35 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=872831 00:30:55.546 13:30:35 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:30:55.546 13:30:35 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:30:55.546 13:30:35 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 872831' 00:30:55.546 Process bdevio pid: 872831 00:30:55.546 13:30:35 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 872831 00:30:55.546 13:30:35 
blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 872831 ']' 00:30:55.546 13:30:35 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:55.546 13:30:35 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:55.546 13:30:35 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:55.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:55.546 13:30:35 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:55.546 13:30:35 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:30:55.546 [2024-07-26 13:30:35.982382] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:30:55.546 [2024-07-26 13:30:35.982436] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid872831 ] 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:55.546 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:30:55.546 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3f:01.6 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:55.546 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:55.546 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:55.806 [2024-07-26 13:30:36.115528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:55.806 [2024-07-26 13:30:36.204024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:55.806 
[2024-07-26 13:30:36.204119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:55.806 [2024-07-26 13:30:36.204123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.806 [2024-07-26 13:30:36.225457] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:30:55.806 [2024-07-26 13:30:36.233481] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:30:55.806 [2024-07-26 13:30:36.241505] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:30:56.065 [2024-07-26 13:30:36.340542] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:30:58.603 [2024-07-26 13:30:38.509607] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:30:58.603 [2024-07-26 13:30:38.509683] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:30:58.603 [2024-07-26 13:30:38.509697] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:58.603 [2024-07-26 13:30:38.517629] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:30:58.603 [2024-07-26 13:30:38.517647] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:30:58.603 [2024-07-26 13:30:38.517658] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:58.603 [2024-07-26 13:30:38.525651] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:30:58.603 [2024-07-26 13:30:38.525667] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:30:58.603 [2024-07-26 13:30:38.525677] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:58.603 
[2024-07-26 13:30:38.533672] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:30:58.604 [2024-07-26 13:30:38.533688] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:30:58.604 [2024-07-26 13:30:38.533699] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:30:58.604 13:30:38 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:58.604 13:30:38 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:30:58.604 13:30:38 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:30:58.604 I/O targets: 00:30:58.604 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:30:58.604 crypto_ram2: 65536 blocks of 512 bytes (32 MiB) 00:30:58.604 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:30:58.604 crypto_ram4: 8192 blocks of 4096 bytes (32 MiB) 00:30:58.604 00:30:58.604 00:30:58.604 CUnit - A unit testing framework for C - Version 2.1-3 00:30:58.604 http://cunit.sourceforge.net/ 00:30:58.604 00:30:58.604 00:30:58.604 Suite: bdevio tests on: crypto_ram4 00:30:58.604 Test: blockdev write read block ...passed 00:30:58.604 Test: blockdev write zeroes read block ...passed 00:30:58.604 Test: blockdev write zeroes read no split ...passed 00:30:58.604 Test: blockdev write zeroes read split ...passed 00:30:58.604 Test: blockdev write zeroes read split partial ...passed 00:30:58.604 Test: blockdev reset ...passed 00:30:58.604 Test: blockdev write read 8 blocks ...passed 00:30:58.604 Test: blockdev write read size > 128k ...passed 00:30:58.604 Test: blockdev write read invalid size ...passed 00:30:58.604 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:58.604 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:58.604 Test: blockdev write 
read max offset ...passed 00:30:58.604 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:58.604 Test: blockdev writev readv 8 blocks ...passed 00:30:58.604 Test: blockdev writev readv 30 x 1block ...passed 00:30:58.604 Test: blockdev writev readv block ...passed 00:30:58.604 Test: blockdev writev readv size > 128k ...passed 00:30:58.604 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:58.604 Test: blockdev comparev and writev ...passed 00:30:58.604 Test: blockdev nvme passthru rw ...passed 00:30:58.604 Test: blockdev nvme passthru vendor specific ...passed 00:30:58.604 Test: blockdev nvme admin passthru ...passed 00:30:58.604 Test: blockdev copy ...passed 00:30:58.604 Suite: bdevio tests on: crypto_ram3 00:30:58.604 Test: blockdev write read block ...passed 00:30:58.604 Test: blockdev write zeroes read block ...passed 00:30:58.604 Test: blockdev write zeroes read no split ...passed 00:30:58.604 Test: blockdev write zeroes read split ...passed 00:30:58.604 Test: blockdev write zeroes read split partial ...passed 00:30:58.604 Test: blockdev reset ...passed 00:30:58.604 Test: blockdev write read 8 blocks ...passed 00:30:58.604 Test: blockdev write read size > 128k ...passed 00:30:58.604 Test: blockdev write read invalid size ...passed 00:30:58.604 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:58.604 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:58.604 Test: blockdev write read max offset ...passed 00:30:58.604 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:58.604 Test: blockdev writev readv 8 blocks ...passed 00:30:58.604 Test: blockdev writev readv 30 x 1block ...passed 00:30:58.604 Test: blockdev writev readv block ...passed 00:30:58.604 Test: blockdev writev readv size > 128k ...passed 00:30:58.604 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:58.604 Test: blockdev comparev and writev ...passed 
00:30:58.604 Test: blockdev nvme passthru rw ...passed 00:30:58.604 Test: blockdev nvme passthru vendor specific ...passed 00:30:58.604 Test: blockdev nvme admin passthru ...passed 00:30:58.604 Test: blockdev copy ...passed 00:30:58.604 Suite: bdevio tests on: crypto_ram2 00:30:58.604 Test: blockdev write read block ...passed 00:30:58.604 Test: blockdev write zeroes read block ...passed 00:30:58.604 Test: blockdev write zeroes read no split ...passed 00:30:58.604 Test: blockdev write zeroes read split ...passed 00:30:58.604 Test: blockdev write zeroes read split partial ...passed 00:30:58.604 Test: blockdev reset ...passed 00:30:58.604 Test: blockdev write read 8 blocks ...passed 00:30:58.604 Test: blockdev write read size > 128k ...passed 00:30:58.604 Test: blockdev write read invalid size ...passed 00:30:58.604 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:58.604 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:58.604 Test: blockdev write read max offset ...passed 00:30:58.604 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:58.604 Test: blockdev writev readv 8 blocks ...passed 00:30:58.604 Test: blockdev writev readv 30 x 1block ...passed 00:30:58.604 Test: blockdev writev readv block ...passed 00:30:58.604 Test: blockdev writev readv size > 128k ...passed 00:30:58.604 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:58.604 Test: blockdev comparev and writev ...passed 00:30:58.604 Test: blockdev nvme passthru rw ...passed 00:30:58.604 Test: blockdev nvme passthru vendor specific ...passed 00:30:58.604 Test: blockdev nvme admin passthru ...passed 00:30:58.604 Test: blockdev copy ...passed 00:30:58.604 Suite: bdevio tests on: crypto_ram 00:30:58.604 Test: blockdev write read block ...passed 00:30:58.604 Test: blockdev write zeroes read block ...passed 00:30:58.604 Test: blockdev write zeroes read no split ...passed 00:30:58.604 Test: blockdev write zeroes 
read split ...passed 00:30:58.604 Test: blockdev write zeroes read split partial ...passed 00:30:58.604 Test: blockdev reset ...passed 00:30:58.604 Test: blockdev write read 8 blocks ...passed 00:30:58.604 Test: blockdev write read size > 128k ...passed 00:30:58.604 Test: blockdev write read invalid size ...passed 00:30:58.604 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:30:58.604 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:30:58.604 Test: blockdev write read max offset ...passed 00:30:58.604 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:30:58.604 Test: blockdev writev readv 8 blocks ...passed 00:30:58.604 Test: blockdev writev readv 30 x 1block ...passed 00:30:58.604 Test: blockdev writev readv block ...passed 00:30:58.604 Test: blockdev writev readv size > 128k ...passed 00:30:58.604 Test: blockdev writev readv size > 128k in two iovs ...passed 00:30:58.604 Test: blockdev comparev and writev ...passed 00:30:58.604 Test: blockdev nvme passthru rw ...passed 00:30:58.604 Test: blockdev nvme passthru vendor specific ...passed 00:30:58.604 Test: blockdev nvme admin passthru ...passed 00:30:58.604 Test: blockdev copy ...passed 00:30:58.604 00:30:58.604 Run Summary: Type Total Ran Passed Failed Inactive 00:30:58.604 suites 4 4 n/a 0 0 00:30:58.604 tests 92 92 92 0 0 00:30:58.604 asserts 520 520 520 0 n/a 00:30:58.604 00:30:58.604 Elapsed time = 0.510 seconds 00:30:58.604 0 00:30:58.604 13:30:39 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 872831 00:30:58.604 13:30:39 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 872831 ']' 00:30:58.604 13:30:39 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 872831 00:30:58.604 13:30:39 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:30:58.604 13:30:39 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@955 -- 
# '[' Linux = Linux ']' 00:30:58.604 13:30:39 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 872831 00:30:58.605 13:30:39 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:58.605 13:30:39 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:58.605 13:30:39 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 872831' 00:30:58.605 killing process with pid 872831 00:30:58.605 13:30:39 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@969 -- # kill 872831 00:30:58.605 13:30:39 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@974 -- # wait 872831 00:30:59.173 13:30:39 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:30:59.173 00:30:59.173 real 0m3.468s 00:30:59.173 user 0m9.704s 00:30:59.173 sys 0m0.535s 00:30:59.173 13:30:39 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:59.173 13:30:39 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:30:59.173 ************************************ 00:30:59.174 END TEST bdev_bounds 00:30:59.174 ************************************ 00:30:59.174 13:30:39 blockdev_crypto_aesni -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:30:59.174 13:30:39 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:30:59.174 13:30:39 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:59.174 13:30:39 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:30:59.174 ************************************ 00:30:59.174 START TEST bdev_nbd 00:30:59.174 ************************************ 00:30:59.174 13:30:39 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:30:59.174 13:30:39 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:30:59.174 13:30:39 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:30:59.174 13:30:39 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:59.174 13:30:39 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:30:59.174 13:30:39 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:30:59.174 13:30:39 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:30:59.174 13:30:39 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=4 00:30:59.174 13:30:39 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:30:59.174 13:30:39 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:30:59.174 13:30:39 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:30:59.174 13:30:39 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=4 00:30:59.174 13:30:39 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:30:59.174 13:30:39 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:30:59.174 13:30:39 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 
'crypto_ram4') 00:30:59.174 13:30:39 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:30:59.174 13:30:39 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=873403 00:30:59.174 13:30:39 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:30:59.174 13:30:39 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:30:59.174 13:30:39 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 873403 /var/tmp/spdk-nbd.sock 00:30:59.174 13:30:39 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 873403 ']' 00:30:59.174 13:30:39 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:30:59.174 13:30:39 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:59.174 13:30:39 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:30:59.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:30:59.174 13:30:39 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:59.174 13:30:39 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:30:59.174 [2024-07-26 13:30:39.538416] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:30:59.174 [2024-07-26 13:30:39.538478] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:59.174 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3f:01.6 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:59.174 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:59.174 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:59.174 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:59.174 [2024-07-26 13:30:39.672515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:59.434 [2024-07-26 13:30:39.756754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.434 [2024-07-26 13:30:39.778048] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:30:59.434 [2024-07-26 13:30:39.786069] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:30:59.434 [2024-07-26 13:30:39.794087] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:30:59.434 [2024-07-26 13:30:39.900984] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:31:01.971 [2024-07-26 13:30:42.078629] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:31:01.971 [2024-07-26 13:30:42.078696] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:31:01.971 [2024-07-26 13:30:42.078710] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred 
pending base bdev arrival 00:31:01.971 [2024-07-26 13:30:42.086649] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:31:01.971 [2024-07-26 13:30:42.086667] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:01.971 [2024-07-26 13:30:42.086679] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:01.971 [2024-07-26 13:30:42.094670] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:31:01.971 [2024-07-26 13:30:42.094686] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:31:01.971 [2024-07-26 13:30:42.094697] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:01.971 [2024-07-26 13:30:42.102690] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:31:01.971 [2024-07-26 13:30:42.102706] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:31:01.971 [2024-07-26 13:30:42.102716] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 
00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:31:01.971 
13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:01.971 1+0 records in 00:31:01.971 1+0 records out 00:31:01.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275827 s, 14.8 MB/s 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:31:01.971 13:30:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:31:02.232 13:30:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:31:02.232 13:30:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:31:02.232 13:30:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:31:02.232 13:30:42 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:31:02.232 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:31:02.232 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:02.232 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:02.232 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:31:02.232 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:31:02.232 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:02.232 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:02.232 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:02.232 1+0 records in 00:31:02.232 1+0 records out 00:31:02.232 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208926 s, 19.6 MB/s 00:31:02.232 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:02.232 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:31:02.232 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:02.232 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:02.232 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:31:02.232 13:30:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:02.232 13:30:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 
00:31:02.232 13:30:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:31:02.491 13:30:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:31:02.491 13:30:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:31:02.491 13:30:42 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:31:02.491 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:31:02.491 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:31:02.491 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:02.491 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:02.491 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:31:02.491 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:31:02.491 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:02.492 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:02.492 13:30:42 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:02.492 1+0 records in 00:31:02.492 1+0 records out 00:31:02.492 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277166 s, 14.8 MB/s 00:31:02.492 13:30:43 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:02.492 13:30:43 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:31:02.492 13:30:43 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:02.492 13:30:43 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:02.492 13:30:43 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:31:02.492 13:30:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:02.492 13:30:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:31:02.492 13:30:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 00:31:02.751 13:30:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:31:02.751 13:30:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:31:02.751 13:30:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:31:02.751 13:30:43 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:31:02.751 13:30:43 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:31:02.751 13:30:43 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:02.751 13:30:43 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:02.751 13:30:43 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:31:02.751 13:30:43 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:31:02.751 13:30:43 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:02.751 13:30:43 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:02.751 13:30:43 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:02.751 1+0 records in 00:31:02.751 1+0 records out 00:31:02.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332901 s, 12.3 MB/s 00:31:02.751 13:30:43 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:03.010 13:30:43 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:31:03.010 13:30:43 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:03.010 13:30:43 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:03.010 13:30:43 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:31:03.010 13:30:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:03.010 13:30:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:31:03.010 13:30:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:03.010 13:30:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:31:03.010 { 00:31:03.010 "nbd_device": "/dev/nbd0", 00:31:03.010 "bdev_name": "crypto_ram" 00:31:03.010 }, 00:31:03.010 { 00:31:03.010 "nbd_device": "/dev/nbd1", 00:31:03.010 "bdev_name": "crypto_ram2" 00:31:03.010 }, 00:31:03.010 { 00:31:03.010 "nbd_device": "/dev/nbd2", 00:31:03.010 "bdev_name": "crypto_ram3" 00:31:03.010 }, 00:31:03.010 { 00:31:03.010 "nbd_device": "/dev/nbd3", 00:31:03.010 "bdev_name": "crypto_ram4" 00:31:03.010 } 00:31:03.010 ]' 00:31:03.010 13:30:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:31:03.010 13:30:43 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:31:03.010 { 00:31:03.010 "nbd_device": "/dev/nbd0", 00:31:03.010 "bdev_name": "crypto_ram" 00:31:03.010 }, 00:31:03.010 { 00:31:03.010 "nbd_device": "/dev/nbd1", 00:31:03.010 "bdev_name": "crypto_ram2" 00:31:03.010 }, 00:31:03.010 { 00:31:03.010 "nbd_device": "/dev/nbd2", 00:31:03.010 "bdev_name": "crypto_ram3" 00:31:03.010 }, 00:31:03.010 { 00:31:03.010 "nbd_device": "/dev/nbd3", 00:31:03.010 "bdev_name": "crypto_ram4" 00:31:03.010 } 00:31:03.010 ]' 00:31:03.010 13:30:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:31:03.269 13:30:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:31:03.269 13:30:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:03.269 13:30:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:31:03.269 13:30:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:03.269 13:30:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:31:03.269 13:30:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:03.269 13:30:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:03.269 13:30:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:03.528 13:30:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:03.528 13:30:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:03.528 13:30:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:03.528 13:30:43 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:03.528 13:30:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:03.528 13:30:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:03.528 13:30:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:03.528 13:30:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:03.528 13:30:43 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:31:03.528 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:03.528 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:03.528 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:03.528 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:03.528 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:03.528 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:03.528 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:03.528 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:03.528 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:03.528 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:31:03.787 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:31:03.787 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:31:03.787 
13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:31:03.787 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:03.787 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:03.787 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:31:03.787 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:03.787 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:03.787 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:03.787 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:31:04.046 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:31:04.046 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:31:04.046 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:31:04.046 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:04.046 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:04.046 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:31:04.046 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:04.046 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:04.046 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:04.046 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:04.046 13:30:44 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:04.305 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:04.305 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:04.305 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:04.305 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:04.305 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:31:04.305 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:04.306 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:31:04.306 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:31:04.306 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:31:04.306 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:31:04.306 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:31:04.306 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:31:04.306 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:31:04.306 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:04.306 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:31:04.306 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:31:04.306 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11') 00:31:04.306 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:31:04.306 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:31:04.306 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:04.306 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:31:04.306 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:04.306 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:31:04.306 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:04.306 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:31:04.306 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:04.306 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:31:04.306 13:30:44 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:31:04.565 /dev/nbd0 00:31:04.565 13:30:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:04.565 13:30:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:04.565 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:31:04.565 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:31:04.565 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:04.565 13:30:45 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:04.565 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:31:04.565 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:31:04.565 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:04.565 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:04.565 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:04.565 1+0 records in 00:31:04.565 1+0 records out 00:31:04.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027739 s, 14.8 MB/s 00:31:04.565 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:04.565 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:31:04.565 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:04.565 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:04.565 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:31:04.565 13:30:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:04.565 13:30:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:31:04.565 13:30:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd1 00:31:04.824 /dev/nbd1 00:31:04.824 13:30:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd1 00:31:04.824 13:30:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:04.824 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:31:04.824 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:31:04.824 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:04.824 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:04.824 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:31:04.824 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:31:04.824 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:04.824 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:04.824 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:04.824 1+0 records in 00:31:04.824 1+0 records out 00:31:04.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294111 s, 13.9 MB/s 00:31:04.824 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:05.084 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:31:05.084 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:05.084 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:05.084 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:31:05.084 13:30:45 blockdev_crypto_aesni.bdev_nbd 
-- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:05.084 13:30:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:31:05.084 13:30:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd10 00:31:05.084 /dev/nbd10 00:31:05.084 13:30:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:31:05.084 13:30:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:31:05.084 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:31:05.084 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:31:05.084 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:05.084 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:05.084 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:31:05.084 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:31:05.084 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:05.084 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:05.084 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:05.084 1+0 records in 00:31:05.084 1+0 records out 00:31:05.084 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300001 s, 13.7 MB/s 00:31:05.084 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:05.344 13:30:45 blockdev_crypto_aesni.bdev_nbd -- 
common/autotest_common.sh@886 -- # size=4096 00:31:05.344 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:05.344 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:05.344 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:31:05.344 13:30:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:05.344 13:30:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:31:05.344 13:30:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 /dev/nbd11 00:31:05.344 /dev/nbd11 00:31:05.344 13:30:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:31:05.604 13:30:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:31:05.604 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:31:05.604 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:31:05.604 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:05.604 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:05.604 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:31:05.604 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:31:05.604 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:05.604 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:05.604 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:05.604 1+0 records in 00:31:05.604 1+0 records out 00:31:05.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235564 s, 17.4 MB/s 00:31:05.604 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:05.604 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:31:05.604 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:05.604 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:05.604 13:30:45 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:31:05.604 13:30:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:05.604 13:30:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:31:05.604 13:30:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:05.604 13:30:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:05.604 13:30:45 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:05.604 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:05.604 { 00:31:05.604 "nbd_device": "/dev/nbd0", 00:31:05.604 "bdev_name": "crypto_ram" 00:31:05.604 }, 00:31:05.604 { 00:31:05.604 "nbd_device": "/dev/nbd1", 00:31:05.604 "bdev_name": "crypto_ram2" 00:31:05.604 }, 00:31:05.604 { 00:31:05.604 "nbd_device": "/dev/nbd10", 00:31:05.604 "bdev_name": "crypto_ram3" 00:31:05.604 }, 00:31:05.604 { 00:31:05.604 "nbd_device": "/dev/nbd11", 
00:31:05.604 "bdev_name": "crypto_ram4" 00:31:05.604 } 00:31:05.604 ]' 00:31:05.604 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:31:05.604 { 00:31:05.604 "nbd_device": "/dev/nbd0", 00:31:05.604 "bdev_name": "crypto_ram" 00:31:05.604 }, 00:31:05.604 { 00:31:05.604 "nbd_device": "/dev/nbd1", 00:31:05.604 "bdev_name": "crypto_ram2" 00:31:05.604 }, 00:31:05.604 { 00:31:05.604 "nbd_device": "/dev/nbd10", 00:31:05.604 "bdev_name": "crypto_ram3" 00:31:05.604 }, 00:31:05.604 { 00:31:05.604 "nbd_device": "/dev/nbd11", 00:31:05.604 "bdev_name": "crypto_ram4" 00:31:05.604 } 00:31:05.604 ]' 00:31:05.604 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:05.868 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:31:05.868 /dev/nbd1 00:31:05.868 /dev/nbd10 00:31:05.868 /dev/nbd11' 00:31:05.868 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:31:05.868 /dev/nbd1 00:31:05.868 /dev/nbd10 00:31:05.868 /dev/nbd11' 00:31:05.868 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:05.868 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:31:05.868 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:31:05.868 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:31:05.868 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:31:05.868 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:31:05.868 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:31:05.868 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:05.868 13:30:46 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:31:05.868 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:31:05.868 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:31:05.868 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:31:05.868 256+0 records in 00:31:05.868 256+0 records out 00:31:05.868 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108314 s, 96.8 MB/s 00:31:05.868 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:05.868 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:31:05.868 256+0 records in 00:31:05.868 256+0 records out 00:31:05.868 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0375648 s, 27.9 MB/s 00:31:05.868 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:05.868 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:31:05.868 256+0 records in 00:31:05.868 256+0 records out 00:31:05.869 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0476007 s, 22.0 MB/s 00:31:05.869 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:05.869 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:31:05.869 256+0 records in 00:31:05.869 256+0 records out 00:31:05.869 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0364764 s, 28.7 MB/s 00:31:05.869 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:05.869 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:31:05.869 256+0 records in 00:31:05.869 256+0 records out 00:31:05.869 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0532218 s, 19.7 MB/s 00:31:05.869 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:31:05.869 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:31:05.869 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:05.869 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:31:05.869 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:31:05.869 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:31:05.869 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:31:05.869 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:05.869 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:31:06.171 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:06.171 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:31:06.171 13:30:46 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:06.171 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:31:06.171 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:06.171 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:31:06.171 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:31:06.171 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:31:06.171 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:06.171 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:31:06.171 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:06.171 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:31:06.171 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:06.171 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:06.171 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:06.171 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:06.171 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:06.171 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 
)) 00:31:06.171 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:06.171 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:06.171 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:06.171 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:06.171 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:06.171 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:31:06.431 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:06.431 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:06.431 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:06.431 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:06.431 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:06.431 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:06.431 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:06.431 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:06.431 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:06.431 13:30:46 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:31:06.689 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:31:06.689 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd10 00:31:06.690 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:31:06.690 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:06.690 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:06.690 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:31:06.690 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:06.690 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:06.690 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:06.690 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:31:06.949 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:31:06.949 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:31:06.949 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:31:06.949 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:06.949 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:06.949 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:31:06.949 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:06.949 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:06.949 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:06.949 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:06.949 13:30:47 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:07.208 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:07.208 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:07.208 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:07.208 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:07.208 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:31:07.208 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:07.208 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:31:07.208 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:31:07.208 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:31:07.208 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:31:07.208 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:31:07.208 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:31:07.208 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:31:07.208 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:07.208 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:31:07.208 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:31:07.208 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:31:07.208 13:30:47 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:31:07.467 malloc_lvol_verify 00:31:07.467 13:30:47 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:31:07.727 58104fcf-66a9-4e24-8d74-43ce3ecd9178 00:31:07.727 13:30:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:31:07.986 6cd86b3f-993b-4430-9b34-f9b5e0fb0e0e 00:31:07.986 13:30:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:31:08.245 /dev/nbd0 00:31:08.245 13:30:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:31:08.245 mke2fs 1.46.5 (30-Dec-2021) 00:31:08.245 Discarding device blocks: 0/4096 done 00:31:08.245 Creating filesystem with 4096 1k blocks and 1024 inodes 00:31:08.245 00:31:08.245 Allocating group tables: 0/1 done 00:31:08.245 Writing inode tables: 0/1 done 00:31:08.245 Creating journal (1024 blocks): done 00:31:08.245 Writing superblocks and filesystem accounting information: 0/1 done 00:31:08.245 00:31:08.245 13:30:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:31:08.245 13:30:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:08.245 13:30:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:08.245 13:30:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:08.245 13:30:48 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:08.245 13:30:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:31:08.245 13:30:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:08.245 13:30:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:08.505 13:30:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:08.505 13:30:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:08.505 13:30:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:08.505 13:30:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:08.505 13:30:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:08.505 13:30:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:08.505 13:30:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:08.505 13:30:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:08.505 13:30:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:31:08.505 13:30:48 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:31:08.505 13:30:48 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 873403 00:31:08.505 13:30:48 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 873403 ']' 00:31:08.505 13:30:48 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 873403 00:31:08.505 13:30:48 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:31:08.505 13:30:48 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:31:08.505 13:30:48 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 873403 00:31:08.505 13:30:48 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:08.505 13:30:48 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:08.505 13:30:48 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 873403' 00:31:08.505 killing process with pid 873403 00:31:08.505 13:30:48 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@969 -- # kill 873403 00:31:08.505 13:30:48 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@974 -- # wait 873403 00:31:08.765 13:30:49 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:31:08.765 00:31:08.765 real 0m9.770s 00:31:08.765 user 0m12.686s 00:31:08.765 sys 0m3.812s 00:31:08.765 13:30:49 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:08.765 13:30:49 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:31:08.765 ************************************ 00:31:08.765 END TEST bdev_nbd 00:31:08.765 ************************************ 00:31:08.765 13:30:49 blockdev_crypto_aesni -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:31:08.765 13:30:49 blockdev_crypto_aesni -- bdev/blockdev.sh@763 -- # '[' crypto_aesni = nvme ']' 00:31:08.765 13:30:49 blockdev_crypto_aesni -- bdev/blockdev.sh@763 -- # '[' crypto_aesni = gpt ']' 00:31:08.765 13:30:49 blockdev_crypto_aesni -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:31:08.765 13:30:49 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:08.765 13:30:49 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:08.765 13:30:49 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:09.025 
************************************ 00:31:09.025 START TEST bdev_fio 00:31:09.025 ************************************ 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:31:09.025 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram]' 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram2]' 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram2 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 
00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram3]' 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram3 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram4]' 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram4 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:31:09.025 ************************************ 00:31:09.025 START TEST bdev_fio_rw_verify 00:31:09.025 ************************************ 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 
--verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 
00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:09.025 13:30:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:09.026 13:30:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:09.026 13:30:49 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:09.617 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:09.617 
job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:09.617 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:09.617 job_crypto_ram4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:09.617 fio-3.35 00:31:09.617 Starting 4 threads 00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.617 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.617 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.617 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.617 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.617 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.617 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.617 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.617 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.617 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.617 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.617 EAL: Requested device 0000:3d:02.2 cannot be used 
00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.617 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.617 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.617 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.617 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.617 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.617 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.617 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.617 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.617 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.617 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.617 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.617 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.617 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.617 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:09.617 
qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:09.617 EAL: Requested device 0000:3f:02.1 cannot be used
00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:09.617 EAL: Requested device 0000:3f:02.2 cannot be used
00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:09.617 EAL: Requested device 0000:3f:02.3 cannot be used
00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:09.617 EAL: Requested device 0000:3f:02.4 cannot be used
00:31:09.617 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:09.618 EAL: Requested device 0000:3f:02.5 cannot be used
00:31:09.618 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:09.618 EAL: Requested device 0000:3f:02.6 cannot be used
00:31:09.618 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:31:09.618 EAL: Requested device 0000:3f:02.7 cannot be used
00:31:24.487
00:31:24.487 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=875931: Fri Jul 26 13:31:02 2024
00:31:24.487 read: IOPS=20.8k, BW=81.3MiB/s (85.2MB/s)(813MiB/10001msec)
00:31:24.487 slat (usec): min=15, max=1472, avg=64.18, stdev=50.46
00:31:24.487 clat (usec): min=11, max=2401, avg=347.36, stdev=296.76
00:31:24.487 lat (usec): min=48, max=2483, avg=411.55, stdev=332.86
00:31:24.487 clat percentiles (usec):
00:31:24.487 | 50.000th=[ 260], 99.000th=[ 1434], 99.900th=[ 1663], 99.990th=[ 1811],
00:31:24.487 | 99.999th=[ 2114]
00:31:24.487 write: IOPS=22.8k, BW=89.0MiB/s (93.4MB/s)(871MiB/9783msec); 0 zone resets
00:31:24.487 slat (usec): min=21, max=438, avg=76.99, stdev=52.03
00:31:24.487 clat (usec): min=32, max=2912, avg=415.82, stdev=344.96
00:31:24.487 lat (usec): min=59, max=3237, avg=492.81, stdev=383.00
00:31:24.487 clat percentiles (usec):
00:31:24.487 | 50.000th=[ 326], 99.000th=[ 1795], 99.900th=[ 1958], 99.990th=[ 2073],
00:31:24.487 | 99.999th=[ 2311]
00:31:24.487 bw ( KiB/s): min=70312, max=121424, per=97.81%, avg=89171.37, stdev=2996.33, samples=76
00:31:24.487 iops : min=17578, max=30356, avg=22292.84, stdev=749.08, samples=76
00:31:24.487 lat (usec) : 20=0.01%, 50=0.01%, 100=7.80%, 250=33.46%, 500=39.01%
00:31:24.487 lat (usec) : 750=7.64%, 1000=5.02%
00:31:24.487 lat (msec) : 2=7.04%, 4=0.03%
00:31:24.487 cpu : usr=99.55%, sys=0.00%, ctx=59, majf=0, minf=236
00:31:24.487 IO depths : 1=10.8%, 2=25.4%, 4=50.8%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:24.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:24.487 complete : 0=0.0%, 4=88.8%, 8=11.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:24.487 issued rwts: total=208026,222980,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:24.487 latency : target=0, window=0, percentile=100.00%, depth=8
00:31:24.487
00:31:24.487 Run status group 0 (all jobs):
00:31:24.487 READ: bw=81.3MiB/s (85.2MB/s), 81.3MiB/s-81.3MiB/s (85.2MB/s-85.2MB/s), io=813MiB (852MB), run=10001-10001msec
00:31:24.487 WRITE: bw=89.0MiB/s (93.4MB/s), 89.0MiB/s-89.0MiB/s (93.4MB/s-93.4MB/s), io=871MiB (913MB), run=9783-9783msec
00:31:24.487
00:31:24.487 real 0m13.469s
00:31:24.487 user 0m54.018s
00:31:24.487 sys 0m0.480s
00:31:24.487 13:31:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:24.487 13:31:02 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:31:24.487 ************************************
00:31:24.487 END TEST bdev_fio_rw_verify
00:31:24.487 ************************************
00:31:24.487 13:31:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:31:24.487 13:31:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:31:24.487 13:31:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' ''
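The trim pass regenerates bdev.fio and then (blockdev.sh@354-357, below) feeds the bdev JSON through `jq -r 'select(.supported_io_types.unmap == true) | .name'`, so only bdevs that report unmap (trim) support get trim jobs. A sketch of that selection; `jq` is emulated with `python3` so the snippet has no jq dependency, `crypto_ram` is taken from this log, and `stub0` is a hypothetical bdev without unmap support, included only to show the filter rejecting something:

```shell
# Emulates the jq filter from the log above over bdev_get_bdevs-style JSON:
#   jq -r 'select(.supported_io_types.unmap == true) | .name'
# stub0 is a made-up non-unmap bdev; only crypto_ram should survive.
selected=$(python3 - <<'EOF'
import json
bdevs = json.loads("""
[
  {"name": "crypto_ram", "supported_io_types": {"unmap": true}},
  {"name": "stub0",      "supported_io_types": {"unmap": false}}
]
""")
for b in bdevs:
    if b["supported_io_types"].get("unmap"):
        print(b["name"])
EOF
)
echo "$selected"
```

Each surviving name then gets a `[job_<name>]` / `filename=<name>` section appended to bdev.fio, which is why the trim job list below matches the verify job list: all four crypto bdevs here support unmap.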
00:31:24.488 13:31:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:31:24.488 13:31:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:31:24.488 13:31:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:31:24.488 13:31:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:31:24.488 13:31:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:31:24.488 13:31:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:31:24.488 13:31:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:31:24.488 13:31:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:31:24.488 13:31:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:31:24.488 13:31:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:31:24.488 13:31:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:31:24.488 13:31:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:31:24.488 13:31:02 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:31:24.488 13:31:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:31:24.488 13:31:02 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "591a6185-10b6-5df2-9399-f0b159f22e75"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' 
"num_blocks": 65536,' ' "uuid": "591a6185-10b6-5df2-9399-f0b159f22e75",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "5927bd99-75c6-5f26-b740-583792bae2f9"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "5927bd99-75c6-5f26-b740-583792bae2f9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' 
' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "d51e6e74-a821-53a7-8cea-73408fb34d0f"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "d51e6e74-a821-53a7-8cea-73408fb34d0f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "4a65d890-317d-5b23-9ae9-54f6eb51b5c4"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "4a65d890-317d-5b23-9ae9-54f6eb51b5c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' 
"abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:31:24.488 13:31:03 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n crypto_ram 00:31:24.488 crypto_ram2 00:31:24.488 crypto_ram3 00:31:24.488 crypto_ram4 ]] 00:31:24.488 13:31:03 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:31:24.488 13:31:03 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "591a6185-10b6-5df2-9399-f0b159f22e75"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "591a6185-10b6-5df2-9399-f0b159f22e75",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "5927bd99-75c6-5f26-b740-583792bae2f9"' ' 
],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "5927bd99-75c6-5f26-b740-583792bae2f9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "d51e6e74-a821-53a7-8cea-73408fb34d0f"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "d51e6e74-a821-53a7-8cea-73408fb34d0f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' 
"dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "4a65d890-317d-5b23-9ae9-54f6eb51b5c4"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "4a65d890-317d-5b23-9ae9-54f6eb51b5c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:31:24.488 13:31:03 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:31:24.488 13:31:03 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram]' 00:31:24.488 13:31:03 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram 00:31:24.488 13:31:03 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:31:24.488 13:31:03 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram2]' 
00:31:24.488 13:31:03 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram2 00:31:24.488 13:31:03 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:31:24.488 13:31:03 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram3]' 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram3 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram4]' 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram4 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:31:24.489 ************************************ 00:31:24.489 START TEST bdev_fio_trim 00:31:24.489 ************************************ 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- 
common/autotest_common.sh@1345 -- # grep libasan 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:24.489 13:31:03 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:31:24.489 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:24.489 
job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:24.489 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:24.489 job_crypto_ram4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:24.489 fio-3.35 00:31:24.489 Starting 4 threads 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3d:02.2 cannot be used 
00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:24.489 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:24.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:24.489 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:36.702 00:31:36.702 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=878342: Fri Jul 26 13:31:16 2024 00:31:36.702 write: IOPS=38.7k, BW=151MiB/s (159MB/s)(1513MiB/10001msec); 0 zone resets 00:31:36.702 slat (usec): min=11, max=506, avg=58.89, stdev=34.43 00:31:36.702 clat (usec): min=32, max=2175, avg=262.39, stdev=169.65 00:31:36.702 lat (usec): min=47, max=2266, avg=321.28, stdev=190.90 00:31:36.702 clat percentiles (usec): 00:31:36.702 | 50.000th=[ 219], 99.000th=[ 807], 99.900th=[ 938], 99.990th=[ 1057], 00:31:36.702 | 99.999th=[ 1598] 00:31:36.702 bw ( KiB/s): min=139024, max=204872, per=100.00%, avg=155090.11, stdev=4484.71, samples=76 00:31:36.702 iops : min=34756, max=51218, avg=38772.53, stdev=1121.18, samples=76 00:31:36.702 trim: IOPS=38.7k, BW=151MiB/s (159MB/s)(1513MiB/10001msec); 0 zone resets 00:31:36.702 slat (usec): min=3, max=1346, avg=16.23, stdev= 6.64 00:31:36.702 clat (usec): min=25, max=1876, avg=247.62, stdev=110.74 00:31:36.702 lat (usec): min=32, max=1894, avg=263.85, 
stdev=112.48 00:31:36.702 clat percentiles (usec): 00:31:36.702 | 50.000th=[ 231], 99.000th=[ 562], 99.900th=[ 668], 99.990th=[ 766], 00:31:36.702 | 99.999th=[ 1057] 00:31:36.702 bw ( KiB/s): min=139016, max=204904, per=100.00%, avg=155091.37, stdev=4485.85, samples=76 00:31:36.702 iops : min=34752, max=51226, avg=38772.74, stdev=1121.48, samples=76 00:31:36.702 lat (usec) : 50=0.53%, 100=7.79%, 250=50.01%, 500=34.98%, 750=5.69% 00:31:36.702 lat (usec) : 1000=0.99% 00:31:36.702 lat (msec) : 2=0.01%, 4=0.01% 00:31:36.702 cpu : usr=99.64%, sys=0.00%, ctx=95, majf=0, minf=106 00:31:36.702 IO depths : 1=8.1%, 2=26.3%, 4=52.5%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:36.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.702 complete : 0=0.0%, 4=88.4%, 8=11.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:36.702 issued rwts: total=0,387429,387431,0 short=0,0,0,0 dropped=0,0,0,0 00:31:36.702 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:36.702 00:31:36.702 Run status group 0 (all jobs): 00:31:36.702 WRITE: bw=151MiB/s (159MB/s), 151MiB/s-151MiB/s (159MB/s-159MB/s), io=1513MiB (1587MB), run=10001-10001msec 00:31:36.702 TRIM: bw=151MiB/s (159MB/s), 151MiB/s-151MiB/s (159MB/s-159MB/s), io=1513MiB (1587MB), run=10001-10001msec 00:31:36.702 00:31:36.702 real 0m13.472s 00:31:36.702 user 0m54.797s 00:31:36.702 sys 0m0.478s 00:31:36.702 13:31:16 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:36.702 13:31:16 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:31:36.702 ************************************ 00:31:36.702 END TEST bdev_fio_trim 00:31:36.702 ************************************ 00:31:36.702 13:31:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:31:36.702 13:31:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 
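As a quick cross-check of the trim summary above, fio's bandwidth figure is simply IOPS × block size; the numbers below are copied from the log and the check is purely illustrative:

```shell
# Values read off the fio trim summary: ~38.7k IOPS at --bs=4k.
iops=38700
bs=4096
bw=$(awk -v iops="$iops" -v bs="$bs" \
    'BEGIN { printf "%.0f MiB/s (%.0f MB/s)", iops * bs / 1048576, iops * bs / 1000000 }')
echo "$bw"   # 151 MiB/s (159 MB/s), matching the reported figures
```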
00:31:36.702 13:31:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:31:36.702 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:31:36.702 13:31:16 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:31:36.702 00:31:36.702 real 0m27.296s 00:31:36.702 user 1m48.998s 00:31:36.702 sys 0m1.155s 00:31:36.702 13:31:16 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:36.702 13:31:16 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:31:36.702 ************************************ 00:31:36.702 END TEST bdev_fio 00:31:36.702 ************************************ 00:31:36.702 13:31:16 blockdev_crypto_aesni -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:36.702 13:31:16 blockdev_crypto_aesni -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:31:36.702 13:31:16 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:31:36.702 13:31:16 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:36.702 13:31:16 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:36.702 ************************************ 00:31:36.702 START TEST bdev_verify 00:31:36.702 ************************************ 00:31:36.702 13:31:16 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:31:36.702 [2024-07-26 13:31:16.766236] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:31:36.702 [2024-07-26 13:31:16.766295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid880106 ] 00:31:36.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.702 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:36.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.702 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:36.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.702 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:36.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.702 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:36.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.702 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:36.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.702 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:36.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.702 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:36.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.702 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:36.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.702 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:36.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.702 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:36.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.702 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:36.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.702 EAL: Requested device 0000:3d:02.3 cannot be used 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.702 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:36.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.702 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:36.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.702 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:36.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.702 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:36.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.702 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:36.702 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:36.702 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:36.702 [2024-07-26 13:31:16.898308] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:36.702 [2024-07-26 13:31:16.983598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:36.702 [2024-07-26 13:31:16.983603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.702 [2024-07-26 13:31:17.004932] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:31:36.702 [2024-07-26 13:31:17.012961] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:31:36.702 [2024-07-26 13:31:17.020982] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:31:36.702 [2024-07-26 13:31:17.133244] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:31:39.237 [2024-07-26 13:31:19.302476] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:31:39.237 [2024-07-26 13:31:19.302544] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:31:39.237 
[2024-07-26 13:31:19.302557] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:39.237 [2024-07-26 13:31:19.310490] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:31:39.237 [2024-07-26 13:31:19.310507] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:39.237 [2024-07-26 13:31:19.310518] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:39.237 [2024-07-26 13:31:19.318512] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:31:39.237 [2024-07-26 13:31:19.318528] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:31:39.237 [2024-07-26 13:31:19.318539] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:39.237 [2024-07-26 13:31:19.326532] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:31:39.237 [2024-07-26 13:31:19.326548] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:31:39.237 [2024-07-26 13:31:19.326558] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:39.237 Running I/O for 5 seconds... 
00:31:44.552 00:31:44.552 Latency(us) 00:31:44.552 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:44.552 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:44.552 Verification LBA range: start 0x0 length 0x1000 00:31:44.552 crypto_ram : 5.07 511.02 2.00 0.00 0.00 249206.17 1402.47 159383.55 00:31:44.552 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:44.552 Verification LBA range: start 0x1000 length 0x1000 00:31:44.552 crypto_ram : 5.07 512.28 2.00 0.00 0.00 248422.49 1848.12 159383.55 00:31:44.552 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:44.553 Verification LBA range: start 0x0 length 0x1000 00:31:44.553 crypto_ram2 : 5.07 512.48 2.00 0.00 0.00 247636.84 1808.79 146800.64 00:31:44.553 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:44.553 Verification LBA range: start 0x1000 length 0x1000 00:31:44.553 crypto_ram2 : 5.08 515.27 2.01 0.00 0.00 246311.22 2188.90 145961.78 00:31:44.553 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:44.553 Verification LBA range: start 0x0 length 0x1000 00:31:44.553 crypto_ram3 : 5.06 4019.42 15.70 0.00 0.00 31498.00 3014.66 28521.27 00:31:44.553 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:44.553 Verification LBA range: start 0x1000 length 0x1000 00:31:44.553 crypto_ram3 : 5.06 4022.74 15.71 0.00 0.00 31452.06 7444.89 28521.27 00:31:44.553 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:44.553 Verification LBA range: start 0x0 length 0x1000 00:31:44.553 crypto_ram4 : 5.06 4020.01 15.70 0.00 0.00 31392.36 3171.94 27682.41 00:31:44.553 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:44.553 Verification LBA range: start 0x1000 length 0x1000 00:31:44.553 crypto_ram4 : 5.07 4040.82 15.78 0.00 0.00 31229.78 2057.83 
27472.69 00:31:44.553 =================================================================================================================== 00:31:44.553 Total : 18154.03 70.91 0.00 0.00 55895.60 1402.47 159383.55 00:31:44.553 00:31:44.553 real 0m8.134s 00:31:44.553 user 0m15.494s 00:31:44.553 sys 0m0.317s 00:31:44.553 13:31:24 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:44.553 13:31:24 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:31:44.553 ************************************ 00:31:44.553 END TEST bdev_verify 00:31:44.553 ************************************ 00:31:44.553 13:31:24 blockdev_crypto_aesni -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:31:44.553 13:31:24 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:31:44.553 13:31:24 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:44.553 13:31:24 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:44.553 ************************************ 00:31:44.553 START TEST bdev_verify_big_io 00:31:44.553 ************************************ 00:31:44.553 13:31:24 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:31:44.553 [2024-07-26 13:31:24.974656] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
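The Total line in the bdevperf verify latency table above aggregates the eight per-job IOPS figures (four crypto bdevs on each of two cores); a small hedged check with the values copied from the table:

```shell
# Per-job IOPS from the verify table: crypto_ram..crypto_ram4 on cores 0x1/0x2.
total=$(awk 'BEGIN { printf "%.2f", 511.02+512.28+512.48+515.27+4019.42+4022.74+4020.01+4040.82 }')
echo "$total"   # 18154.04 -- the reported 18154.03 differs only by per-row rounding
```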
00:31:44.553 [2024-07-26 13:31:24.974712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid881470 ] 00:31:44.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.553 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:44.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.553 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:44.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.553 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:44.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.553 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:44.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.553 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:44.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.553 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:44.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.553 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:44.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.553 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:44.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.553 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:44.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.553 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:44.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.553 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:44.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.553 EAL: Requested device 0000:3d:02.3 cannot be used 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.553 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:44.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.553 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:44.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.553 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:44.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.553 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:44.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.553 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:44.553 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:44.553 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:44.813 [2024-07-26 13:31:25.106530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:44.813 [2024-07-26 13:31:25.190544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:44.813 [2024-07-26 13:31:25.190549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:44.813 [2024-07-26 13:31:25.211868] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:31:44.813 [2024-07-26 13:31:25.219895] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:31:44.813 [2024-07-26 13:31:25.227917] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:31:44.813 [2024-07-26 13:31:25.322356] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:31:47.349 [2024-07-26 13:31:27.489658] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:31:47.349 [2024-07-26 13:31:27.489726] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:31:47.349 
[2024-07-26 13:31:27.489740] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:47.349 [2024-07-26 13:31:27.497672] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:31:47.349 [2024-07-26 13:31:27.497689] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:47.349 [2024-07-26 13:31:27.497707] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:47.349 [2024-07-26 13:31:27.505694] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:31:47.349 [2024-07-26 13:31:27.505710] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:31:47.349 [2024-07-26 13:31:27.505720] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:47.349 [2024-07-26 13:31:27.513718] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:31:47.349 [2024-07-26 13:31:27.513734] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:31:47.349 [2024-07-26 13:31:27.513744] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:47.349 Running I/O for 5 seconds... 00:31:49.885 [2024-07-26 13:31:30.085711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.885 [2024-07-26 13:31:30.086112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.885 [2024-07-26 13:31:30.087201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.885 [2024-07-26 13:31:30.088477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:49.886 [2024-07-26 13:31:30.162868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.164389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.165920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.166326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.169037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.169413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.169768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.170770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.172666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.174213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.175122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.176534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.178408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:49.886 [2024-07-26 13:31:30.178779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.179134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.180722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.182551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.184168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.184719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.185997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.187339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.187705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.188632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.189880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.191731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.192804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:49.886 [2024-07-26 13:31:30.194064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.195325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.196664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.197047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.198668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.200096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.201954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.202385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.203757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.205299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.206808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.207664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.208932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:49.886 [2024-07-26 13:31:30.210477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.211888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.213208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.214483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.216010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.217588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.219234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.220702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.222262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.222955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.224353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.225908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.227439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:49.886 [2024-07-26 13:31:30.229409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.230678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.232084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.233267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.234866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.236284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.237325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.237682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.240038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.240958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.242374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.243306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.244164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:49.886 [2024-07-26 13:31:30.245369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.246773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.247410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.248807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.249181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.249240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.250804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.251597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.253136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.253193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.886 [2024-07-26 13:31:30.254884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.256306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.257820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:49.887 [2024-07-26 13:31:30.257866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.259218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.259638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.261042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.261090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.262096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.263371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.264530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.264576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.265096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.265472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.266928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.266984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:49.887 [2024-07-26 13:31:30.267352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.268565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.270034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.270081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.271021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.271433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.272508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.272555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.272910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.274202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.274854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.274899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.276556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:49.887 [2024-07-26 13:31:30.276927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.277303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.277349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.277703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.279193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.279247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.280406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.280450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.281127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.281186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.281543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.281581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.283693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:49.887 [2024-07-26 13:31:30.283748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.284911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.284955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.285737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.285793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.286155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.286198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.288939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.288996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.290659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.290709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.291518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.291568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:49.887 [2024-07-26 13:31:30.292611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.292652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.294932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.294985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.296016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.296060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.296899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.296949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.298595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.298644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.300954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.301007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.301374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:49.887 [2024-07-26 13:31:30.301418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.302401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.302453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.303613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.303657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.306088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.306149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.306506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.306544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.308447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.308498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.309833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.309876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:49.887 [2024-07-26 13:31:30.311912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.311967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.887 [2024-07-26 13:31:30.312344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.888 [2024-07-26 13:31:30.312384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.888 [2024-07-26 13:31:30.314368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.888 [2024-07-26 13:31:30.314428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.888 [2024-07-26 13:31:30.316105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.888 [2024-07-26 13:31:30.316159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.888 [2024-07-26 13:31:30.317603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.888 [2024-07-26 13:31:30.317654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.888 [2024-07-26 13:31:30.318009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.888 [2024-07-26 13:31:30.318049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:49.888 [2024-07-26 13:31:30.318789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:49.888 [2024-07-26 13:31:30.318843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 (previous message repeated continuously through [2024-07-26 13:31:30.618527])
00:31:50.152 [2024-07-26 13:31:30.618998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.619370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.619411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.621062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.622164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.623132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.623183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.623829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.624339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.624889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.624931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.626026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.627162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.152 [2024-07-26 13:31:30.628786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.628849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.629216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.629666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.630933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.630977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.631904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.632995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.634036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.634083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.634442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.634949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.636386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.152 [2024-07-26 13:31:30.636440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.638048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.639190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.639590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.639635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.639989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.640362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.641309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.641355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.642497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.643579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.643946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.643986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.152 [2024-07-26 13:31:30.644365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.644734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.645908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.645955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.646459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.647570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.647937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.647981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.648810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.649254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.650643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.650690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.152 [2024-07-26 13:31:30.651921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.152 [2024-07-26 13:31:30.653106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.653481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.653522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.654738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.655154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.655947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.655988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.657200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.658348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.658714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.658754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.659111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.659510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.153 [2024-07-26 13:31:30.659876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.659931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.660291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.661546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.661913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.661954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.662320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.662734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.663098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.663154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.663508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.664791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.665167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.153 [2024-07-26 13:31:30.665209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.665566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.665997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.666367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.666410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.666777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.668053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.668428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.668470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.668826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.669240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.669605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.669647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.153 [2024-07-26 13:31:30.669999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.671303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.671670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.671711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.672068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.672500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.672862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.672904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.673266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.674567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.674933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.674975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.153 [2024-07-26 13:31:30.675339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.415 [2024-07-26 13:31:30.675813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.676184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.676227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.676585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.677909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.678279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.678322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.678678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.679115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.679483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.679524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.679877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.681245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.415 [2024-07-26 13:31:30.681609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.681650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.682007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.682028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.682330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.682468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.682827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.682867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.683229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.683252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.683582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.684969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.685021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.415 [2024-07-26 13:31:30.686036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.686078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.686356] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:31:50.415 [2024-07-26 13:31:30.687547] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:31:50.415 [2024-07-26 13:31:30.687604] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:31:50.415 [2024-07-26 13:31:30.689084] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:31:50.415 [2024-07-26 13:31:30.689134] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs! 00:31:50.415 [2024-07-26 13:31:30.690516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.690564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.690603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.690640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.690893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.691029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.415 [2024-07-26 13:31:30.691071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.691108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.691153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.692216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.692263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.692304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.692348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.692589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.692729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.692784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.692825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.692863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.694035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.415 [2024-07-26 13:31:30.694082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.694120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.694164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.694452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.694593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.694634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.694681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.694730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.695876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.695934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.695973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.696013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.415 [2024-07-26 13:31:30.696381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.415 [2024-07-26 13:31:30.696516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:31:50.419 [identical "Failed to get src_mbufs!" error from accel_dpdk_cryptodev.c:468 repeated through 2024-07-26 13:31:30.906889; duplicate lines omitted]
00:31:50.419 [2024-07-26 13:31:30.908328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.419 [2024-07-26 13:31:30.908621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.419 [2024-07-26 13:31:30.910021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.419 [2024-07-26 13:31:30.911554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.419 [2024-07-26 13:31:30.912246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.419 [2024-07-26 13:31:30.913730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.419 [2024-07-26 13:31:30.915624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.419 [2024-07-26 13:31:30.916850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.419 [2024-07-26 13:31:30.917696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.419 [2024-07-26 13:31:30.918674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.419 [2024-07-26 13:31:30.918954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.419 [2024-07-26 13:31:30.920317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.419 [2024-07-26 13:31:30.921626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.419 [2024-07-26 13:31:30.923154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.419 [2024-07-26 13:31:30.924421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.419 [2024-07-26 13:31:30.926966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.419 [2024-07-26 13:31:30.928166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.419 [2024-07-26 13:31:30.929770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.419 [2024-07-26 13:31:30.930136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.419 [2024-07-26 13:31:30.930384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.419 [2024-07-26 13:31:30.930839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.419 [2024-07-26 13:31:30.932529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.419 [2024-07-26 13:31:30.934084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.419 [2024-07-26 13:31:30.935520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.940188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.941091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.681 [2024-07-26 13:31:30.942005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.943166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.943480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.944584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.945876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.947184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.948715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.953549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.954518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.955639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.956347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.956598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.957635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.681 [2024-07-26 13:31:30.958921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.960084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.961073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.966373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.967365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.968182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.969791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.970107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.970991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.972449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.973066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.974274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.978964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.681 [2024-07-26 13:31:30.979715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.981017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.981062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.981373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.982429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.983251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.984237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.984282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.988446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.988500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.989673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.989716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.990090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.681 [2024-07-26 13:31:30.991384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.991435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.992365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.992409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.996406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.996463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.997926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.997967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.998301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.681 [2024-07-26 13:31:30.999549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:30.999599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.000161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.000207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.682 [2024-07-26 13:31:31.005352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.005407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.006159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.006202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.006452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.006975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.007024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.008310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.008354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.013169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.013230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.013738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.013780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.682 [2024-07-26 13:31:31.014025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.014484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.014540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.015853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.015900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.019073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.019128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.019914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.019957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.020241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.021541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.021592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.022784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.682 [2024-07-26 13:31:31.022830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.025807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.025869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.027463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.027513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.027762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.028274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.028327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.029717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.029775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.032892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.032946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.034110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.682 [2024-07-26 13:31:31.034162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.034412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.035493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.035545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.036711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.036756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.043227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.043283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.044682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.044728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.045114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.046728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.046787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.682 [2024-07-26 13:31:31.048317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.048359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.052074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.052129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.053172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.053212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.053495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.054783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.054835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.055569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.055611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.058831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.058886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.682 [2024-07-26 13:31:31.059257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.059301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.059549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.061288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.061340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.062612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.062653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.066828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.066883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.067710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.067755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.068013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.068473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.682 [2024-07-26 13:31:31.068527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.069971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.070012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.071845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.071901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.073258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.073309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.073562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.074406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.074458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.682 [2024-07-26 13:31:31.075291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.683 [2024-07-26 13:31:31.075332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.683 [2024-07-26 13:31:31.079093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.683 [2024-07-26 13:31:31.079163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.946 [message above repeated for each failed allocation through 2024-07-26 13:31:31.272962; duplicates omitted] 
00:31:50.946 [2024-07-26 13:31:31.273220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.273355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.274845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.274897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.276487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.279591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.280892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.280937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.282208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.282459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.282593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.283277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.283322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.946 [2024-07-26 13:31:31.284612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.287854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.288941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.288985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.289517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.289770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.289901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.291192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.291238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.292569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.295880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.297534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.297587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.946 [2024-07-26 13:31:31.298788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.299157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.299293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.300795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.300847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.301214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.305196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.306626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.306671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.308057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.308312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.308448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.946 [2024-07-26 13:31:31.309768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.947 [2024-07-26 13:31:31.309812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.311414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.315530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.316844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.316890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.318424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.318746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.318881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.320171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.320219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.321519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.324642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.325013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.947 [2024-07-26 13:31:31.325054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.326661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.326908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.327046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.328641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.328702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.330190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.334071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.335494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.335538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.335900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.336152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.336285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.947 [2024-07-26 13:31:31.336660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.336702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.337981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.341066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.342376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.342420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.343946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.344205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.344339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.345971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.346014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.346611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.349559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.947 [2024-07-26 13:31:31.351227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.351282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.352355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.352662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.352794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.354095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.354146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.355670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.360974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.362390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.362436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.363815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.364063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.947 [2024-07-26 13:31:31.364207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.365549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.365593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.367000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.370518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.370889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.370932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.372225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.372561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.372695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.373969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.374012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.375317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.947 [2024-07-26 13:31:31.379137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.379197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.379234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.380757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.381100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.381245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.381291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.381329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.382159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.385428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.386976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.388112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.389523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.947 [2024-07-26 13:31:31.389819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.947 [2024-07-26 13:31:31.389952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.391281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.392823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.393611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.397189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.398874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.400477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.401262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.401558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.402961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.404499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.405748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.948 [2024-07-26 13:31:31.407226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.412827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.414366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.414970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.416363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.416613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.418249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.419953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.420999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.421780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.426415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.427532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.428812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.948 [2024-07-26 13:31:31.430008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.430266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.431828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.432207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.433815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.434178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.438544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.440202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.440599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.442024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.442403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.444054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.445422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.948 [2024-07-26 13:31:31.445853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.447455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.450102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.451710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.453221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.453585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.453835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.455448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.456359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.457276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.458377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.462970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:50.948 [2024-07-26 13:31:31.464614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:50.948 [2024-07-26 13:31:31.465601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
[identical "Failed to get src_mbufs!" errors repeated through 00:31:51.213 (device timestamps 13:31:31.465601 to 13:31:31.671379); duplicate entries omitted]
00:31:51.213 [2024-07-26 13:31:31.671516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.213 [2024-07-26 13:31:31.671558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.213 [2024-07-26 13:31:31.671596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.213 [2024-07-26 13:31:31.671645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.213 [2024-07-26 13:31:31.672896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.213 [2024-07-26 13:31:31.672946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.213 [2024-07-26 13:31:31.672987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.213 [2024-07-26 13:31:31.673025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.213 [2024-07-26 13:31:31.673327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.213 [2024-07-26 13:31:31.673465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.213 [2024-07-26 13:31:31.673507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.213 [2024-07-26 13:31:31.673545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.213 [2024-07-26 13:31:31.673590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:51.213 [2024-07-26 13:31:31.674780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.674841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.674883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.674921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.675173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.675310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.675362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.675401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.675439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.676743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.676798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.676839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.676877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:51.214 [2024-07-26 13:31:31.677123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.677266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.677309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.677348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.677399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.678597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.678645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.678683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.679044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.679303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.679440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.679482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.679520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:51.214 [2024-07-26 13:31:31.680869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.682013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.683695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.683736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.685321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.685730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.685867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.686237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.686279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.686885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.688035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.688410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.688476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:51.214 [2024-07-26 13:31:31.688828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.689077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.689221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.690509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.690553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.690910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.692058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.693537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.693580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.694294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.694559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.694695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.695810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:51.214 [2024-07-26 13:31:31.695858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.696216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.697405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.699050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.699108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.700608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.700857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.700993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.702277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.702321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.703611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.704879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.705792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:51.214 [2024-07-26 13:31:31.705836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.707104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.707389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.707523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.708821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.708865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.709730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.710831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.712283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.712327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.712688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.713066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.713211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:51.214 [2024-07-26 13:31:31.714692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.714737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.716224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.717454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.718750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.718794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.720097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.720348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.720482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.721853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.214 [2024-07-26 13:31:31.721901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.215 [2024-07-26 13:31:31.723543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.215 [2024-07-26 13:31:31.725053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:51.215 [2024-07-26 13:31:31.726431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.215 [2024-07-26 13:31:31.726475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.215 [2024-07-26 13:31:31.727809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.215 [2024-07-26 13:31:31.728056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.215 [2024-07-26 13:31:31.728237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.215 [2024-07-26 13:31:31.729576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.215 [2024-07-26 13:31:31.729618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.215 [2024-07-26 13:31:31.731221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.215 [2024-07-26 13:31:31.732376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.215 [2024-07-26 13:31:31.733461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.215 [2024-07-26 13:31:31.733505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.476 [2024-07-26 13:31:31.734501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.476 [2024-07-26 13:31:31.734783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:51.476 [2024-07-26 13:31:31.734916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.476 [2024-07-26 13:31:31.735622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.476 [2024-07-26 13:31:31.735667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.476 [2024-07-26 13:31:31.736948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.476 [2024-07-26 13:31:31.740819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.476 [2024-07-26 13:31:31.742194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.476 [2024-07-26 13:31:31.742238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.476 [2024-07-26 13:31:31.743624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.476 [2024-07-26 13:31:31.743908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.744046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.744485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.744528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.744882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:51.477 [2024-07-26 13:31:31.748690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.750136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.750185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.751632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.751911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.752044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.753374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.753420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.753777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.754878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.756180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.756224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.757517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:51.477 [2024-07-26 13:31:31.757766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.757900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.759563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.759604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.761247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.762408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.762774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.762814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.763903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.764221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.764352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.765657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.765702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:51.477 [2024-07-26 13:31:31.767007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.768118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.769557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.769601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.771076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.771401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.771536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.771895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.771935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.773337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.774450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.775518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.477 [2024-07-26 13:31:31.775564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:51.477 [2024-07-26 13:31:31.777006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:31:51.480 [same error repeated continuously from 13:31:31.777260 through 13:31:31.969043; duplicate log lines omitted]
00:31:51.480 [2024-07-26 13:31:31.969101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.480 [2024-07-26 13:31:31.970808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.480 [2024-07-26 13:31:31.970861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.480 [2024-07-26 13:31:31.973346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.480 [2024-07-26 13:31:31.973402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.480 [2024-07-26 13:31:31.974748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.480 [2024-07-26 13:31:31.974789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.480 [2024-07-26 13:31:31.975083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.480 [2024-07-26 13:31:31.976370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.480 [2024-07-26 13:31:31.976422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.480 [2024-07-26 13:31:31.977339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.480 [2024-07-26 13:31:31.977382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.480 [2024-07-26 13:31:31.980074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:51.480 [2024-07-26 13:31:31.980128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.480 [2024-07-26 13:31:31.980640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.480 [2024-07-26 13:31:31.980696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.480 [2024-07-26 13:31:31.980945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.480 [2024-07-26 13:31:31.982534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.480 [2024-07-26 13:31:31.982592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.480 [2024-07-26 13:31:31.982950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.480 [2024-07-26 13:31:31.982994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.480 [2024-07-26 13:31:31.985614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.480 [2024-07-26 13:31:31.985668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.480 [2024-07-26 13:31:31.986690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.480 [2024-07-26 13:31:31.986735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.481 [2024-07-26 13:31:31.987031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:51.481 [2024-07-26 13:31:31.988087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.481 [2024-07-26 13:31:31.988146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.481 [2024-07-26 13:31:31.988501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.481 [2024-07-26 13:31:31.988539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.481 [2024-07-26 13:31:31.990223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.481 [2024-07-26 13:31:31.990291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.481 [2024-07-26 13:31:31.991793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.481 [2024-07-26 13:31:31.991846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.481 [2024-07-26 13:31:31.992095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.481 [2024-07-26 13:31:31.992549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.481 [2024-07-26 13:31:31.992600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.481 [2024-07-26 13:31:31.992956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.481 [2024-07-26 13:31:31.992994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:51.481 [2024-07-26 13:31:31.994930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.481 [2024-07-26 13:31:31.994984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.481 [2024-07-26 13:31:31.996155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.481 [2024-07-26 13:31:31.996200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.481 [2024-07-26 13:31:31.996507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.481 [2024-07-26 13:31:31.996956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.481 [2024-07-26 13:31:31.997005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.481 [2024-07-26 13:31:31.997371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.481 [2024-07-26 13:31:31.997416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.000178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.000240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.001953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.002004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:51.743 [2024-07-26 13:31:32.002384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.002834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.002883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.004044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.004088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.006372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.006425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.007187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.007228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.007566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.008016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.008066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.009397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:51.743 [2024-07-26 13:31:32.009444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.012181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.012240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.012597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.012635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.013075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.014314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.014364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.015532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.015576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.017548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.017601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.017956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:51.743 [2024-07-26 13:31:32.017995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.018258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.019729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.019790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.021169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.021208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.022590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.022954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.024208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.024254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.024598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.026232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.027059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:51.743 [2024-07-26 13:31:32.028691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.028734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.030191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.030243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.030282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.030323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.030608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.031823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.031876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.031914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.031952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.033149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.033211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:51.743 [2024-07-26 13:31:32.033250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.033288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.033619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.033763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.033807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.033845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.033883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.034980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.035033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.035071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.035123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.035394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.035532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:51.743 [2024-07-26 13:31:32.035581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.035621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.743 [2024-07-26 13:31:32.035662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.036798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.036846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.036884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.036924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.037344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.037484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.037529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.037579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.037624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.038846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:51.744 [2024-07-26 13:31:32.038894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.038945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.038988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.039244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.039380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.039430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.039472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.039513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.040674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.040722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.040759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.040797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.041122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:51.744 [2024-07-26 13:31:32.041267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.041310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.041348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.041386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.042590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.042637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.042675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.042713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.042987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.043122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.043169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.043218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.043255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:51.744 [2024-07-26 13:31:32.044351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.044398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.044435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.044473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.044757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.044889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.044930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.044970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.045018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.046150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.046198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.046236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.744 [2024-07-26 13:31:32.047799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:51.744 [2024-07-26 13:31:32.048099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:31:51.747 [... identical "Failed to get src_mbufs!" error repeated for each subsequent failed allocation, timestamps 2024-07-26 13:31:32.048240 through 13:31:32.259585 (~270 occurrences omitted) ...]
00:31:51.747 [2024-07-26 13:31:32.261123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.747 [2024-07-26 13:31:32.262174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.747 [2024-07-26 13:31:32.264903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:51.747 [2024-07-26 13:31:32.265637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.009 [2024-07-26 13:31:32.266995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.009 [2024-07-26 13:31:32.268214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.009 [2024-07-26 13:31:32.268616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.009 [2024-07-26 13:31:32.269067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.009 [2024-07-26 13:31:32.269439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.269809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.270175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.272035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.272409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.010 [2024-07-26 13:31:32.272774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.272816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.273182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.273635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.273998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.274378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.274421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.276018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.276071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.276437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.276482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.276831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.277290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.010 [2024-07-26 13:31:32.277346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.277698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.277749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.279344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.279397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.279753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.279796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.280149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.280598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.280659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.281015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.281068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.282738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.010 [2024-07-26 13:31:32.282789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.283173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.283219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.283547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.283995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.284044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.284410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.284456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.286246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.286300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.286662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.286705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.287053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.010 [2024-07-26 13:31:32.287516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.287564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.287922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.287974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.290085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.290144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.290500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.290539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.290810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.292327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.292385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.293819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.293860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.010 [2024-07-26 13:31:32.295271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.295323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.295692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.295730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.296036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.297319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.297370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.297925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.297970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.299434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.299486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.299957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.299999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.010 [2024-07-26 13:31:32.300255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.301878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.301928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.302838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.302882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.304353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.304405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.305781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.305825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.306116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.306812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.306865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.308435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.010 [2024-07-26 13:31:32.308485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.310008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.310068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.311719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.311770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.312106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.313852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.313922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.010 [2024-07-26 13:31:32.314284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.314324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.316724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.316779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.318309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.011 [2024-07-26 13:31:32.318359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.318650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.319456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.319508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.319864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.319903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.321419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.321473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.322500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.322545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.322792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.323250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.323299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.011 [2024-07-26 13:31:32.323653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.323691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.326130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.326188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.327283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.327326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.327678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.328127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.328187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.328828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.328868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.331347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.331401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.011 [2024-07-26 13:31:32.332976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.333022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.333374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.333824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.333874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.335200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.335243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.337274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.337328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.338242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.338295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.338724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.339180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.011 [2024-07-26 13:31:32.339232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.340626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.340676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.343068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.343122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.343490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.343533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.343809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.344761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.344811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.345753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.345797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.348217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.011 [2024-07-26 13:31:32.348272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.348632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.348684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.349109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.350783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.350840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.352142] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.352186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.353981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.354036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.354402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.354447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.011 [2024-07-26 13:31:32.354728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.011 [2024-07-26 13:31:32.355931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.499538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.015 [2024-07-26 13:31:32.500654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.501890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.502556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.502603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.502956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.503277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.503412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.504418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.504464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.505791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.506903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.507284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.507330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.015 [2024-07-26 13:31:32.507697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.507945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.508081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.509281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.509328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.509819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.511015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.511391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.511438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.512250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.512563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.512697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.514083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.015 [2024-07-26 13:31:32.514135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.515383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.516584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.516952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.516995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.518571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.518907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.519041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.519681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.519727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.521097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.522432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.523046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.015 [2024-07-26 13:31:32.523088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.524123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.524376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.524511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.525496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.525542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.526476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.527661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.529004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.529049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.530062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.530357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.530495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.015 [2024-07-26 13:31:32.532156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.015 [2024-07-26 13:31:32.532202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.277 [2024-07-26 13:31:32.533607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.277 [2024-07-26 13:31:32.535350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.277 [2024-07-26 13:31:32.536715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.277 [2024-07-26 13:31:32.536768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.277 [2024-07-26 13:31:32.538419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.277 [2024-07-26 13:31:32.538746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.277 [2024-07-26 13:31:32.538881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.277 [2024-07-26 13:31:32.539823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.277 [2024-07-26 13:31:32.539868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.277 [2024-07-26 13:31:32.541067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.277 [2024-07-26 13:31:32.542204] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.277 [2024-07-26 13:31:32.542255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.277 [2024-07-26 13:31:32.542293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.277 [2024-07-26 13:31:32.543221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.277 [2024-07-26 13:31:32.543507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.277 [2024-07-26 13:31:32.543643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.277 [2024-07-26 13:31:32.543689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.277 [2024-07-26 13:31:32.543730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.277 [2024-07-26 13:31:32.545082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.277 [2024-07-26 13:31:32.546559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.277 [2024-07-26 13:31:32.546980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.277 [2024-07-26 13:31:32.548380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.549504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.549813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.278 [2024-07-26 13:31:32.549953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.551199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.552487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.553002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.554678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.555981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.556462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.556824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.557185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.558686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.560239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.561817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.562372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.278 [2024-07-26 13:31:32.563763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.564132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.564498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.566073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.566359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.566947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.568541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.569976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.570348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.572787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.574245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.575831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.576977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.278 [2024-07-26 13:31:32.577270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.578659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.579959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.581150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.581512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.584206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.584574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.586231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.587794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.588127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.588586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.589530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.590380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.278 [2024-07-26 13:31:32.591256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.592911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.593293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.593675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.594032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.594408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.594857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.595230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.595601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.595975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.597628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.597998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.598391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.278 [2024-07-26 13:31:32.598751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.599108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.599567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.599942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.600315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.600677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.602441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.602813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.603183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.603543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.603879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.604339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.604707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.278 [2024-07-26 13:31:32.605066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.605438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.607251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.607627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.608006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.608050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.608430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.608879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.609254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.609620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.609665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.611860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.611919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.278 [2024-07-26 13:31:32.613104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.613154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.613485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.613938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.613988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.614354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.614399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.617042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.617118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.618602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.278 [2024-07-26 13:31:32.618643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.279 [2024-07-26 13:31:32.619008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.279 [2024-07-26 13:31:32.619471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.279 [2024-07-26 13:31:32.619523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:31:52.282 [2024-07-26 13:31:32.762438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.762806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.762852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.763220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.763471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.763607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.764888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.764932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.766218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.767385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.768892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.768940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.770469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.282 [2024-07-26 13:31:32.770717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.770854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.771231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.771278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.771637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.772801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.774350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.774394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.775063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.775320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.775454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.776757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.776802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.282 [2024-07-26 13:31:32.778337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.779616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.781199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.781252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.782834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.783116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.783255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.784849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.784905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.785910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.787017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.788093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.788161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.282 [2024-07-26 13:31:32.788527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.788944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.789083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.790643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.790695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.792178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.793336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.793706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.793750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.794107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.794433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.794570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.795748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.282 [2024-07-26 13:31:32.795794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.796891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.798073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.798456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.798510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.798866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.799134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.799276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.800588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.282 [2024-07-26 13:31:32.800635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.543 [2024-07-26 13:31:32.800994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.543 [2024-07-26 13:31:32.802190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.543 [2024-07-26 13:31:32.802558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.543 [2024-07-26 13:31:32.802603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.543 [2024-07-26 13:31:32.803287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.543 [2024-07-26 13:31:32.803546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.543 [2024-07-26 13:31:32.803680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.804944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.804992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.806129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.807427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.807798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.807852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.809395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.809700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.809834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.544 [2024-07-26 13:31:32.810208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.810257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.811668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.812888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.813796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.813840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.814774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.815022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.815165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.816462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.816505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.817468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.818883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.544 [2024-07-26 13:31:32.820601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.820644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.822000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.822331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.822467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.823743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.823791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.825369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.826607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.827553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.827600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.828803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.829055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.544 [2024-07-26 13:31:32.829200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.830124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.830174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.830964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.832410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.833649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.833695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.834184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.834434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.834572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.836212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.836267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.836625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.544 [2024-07-26 13:31:32.837747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.839116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.839171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.840435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.840753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.840888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.841825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.841891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.842258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.843595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.844132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.844186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.845470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.544 [2024-07-26 13:31:32.845720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.845861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.846240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.846285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.846643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.847752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.848872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.848916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.849861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.850113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.850257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.850624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.850675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.544 [2024-07-26 13:31:32.851031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.852244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.853652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.853705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.855395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.855763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.855899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.856273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.856321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.857249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.858475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.859442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.859489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.544 [2024-07-26 13:31:32.860623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.861003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.544 [2024-07-26 13:31:32.861146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.545 [2024-07-26 13:31:32.861523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.545 [2024-07-26 13:31:32.861568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.545 [2024-07-26 13:31:32.863220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.545 [2024-07-26 13:31:32.864392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.545 [2024-07-26 13:31:32.865512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.545 [2024-07-26 13:31:32.865559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.545 [2024-07-26 13:31:32.866046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.545 [2024-07-26 13:31:32.866390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.545 [2024-07-26 13:31:32.866525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.545 [2024-07-26 13:31:32.866888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.545 [2024-07-26 13:31:32.866938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.545 [2024-07-26 13:31:32.868624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.545 [2024-07-26 13:31:32.869996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.545 [2024-07-26 13:31:32.870375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.545 [2024-07-26 13:31:32.870422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.545 [2024-07-26 13:31:32.871278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.545 [2024-07-26 13:31:32.871596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.545 [2024-07-26 13:31:32.871730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.545 [2024-07-26 13:31:32.873039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.545 [2024-07-26 13:31:32.873083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.545 [2024-07-26 13:31:32.874028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.545 [2024-07-26 13:31:32.875226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:31:52.545 [2024-07-26 13:31:32.875896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:31:52.547 [2024-07-26 13:31:33.022937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:31:52.547 [2024-07-26 13:31:33.022976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:31:52.547 [2024-07-26 13:31:33.024305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:31:52.547 [2024-07-26 13:31:33.024407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:31:52.547 [2024-07-26 13:31:33.027048] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs!
00:31:53.115
00:31:53.115                                                                                                 Latency(us)
00:31:53.115 Device Information                          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:31:53.115 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:31:53.115 	 Verification LBA range: start 0x0 length 0x100
00:31:53.115 	 crypto_ram                  :       5.75      44.52       2.78       0.00     0.00  2788768.15   58300.83 2523293.29
00:31:53.115 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:31:53.115 	 Verification LBA range: start 0x100 length 0x100
00:31:53.115 	 crypto_ram                  :       5.72      44.77       2.80       0.00     0.00  2764052.89   70883.74 2483027.97
00:31:53.115 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:31:53.115 	 Verification LBA range: start 0x0 length 0x100
00:31:53.115 	 crypto_ram2                 :       5.75      44.52       2.78       0.00     0.00  2693711.46   57881.40 2523293.29
00:31:53.115 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:31:53.115 	 Verification LBA range: start 0x100 length 0x100
00:31:53.115 	 crypto_ram2                 :       5.72      44.76       2.80       0.00     0.00  2669590.94   70464.31 2429340.88
00:31:53.115 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:31:53.115 	 Verification LBA range: start 0x0 length 0x100
00:31:53.115 	 crypto_ram3                 :       5.57     293.76      18.36       0.00     0.00   390337.19   60397.98  567069.90
00:31:53.115 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:31:53.115 	 Verification LBA range: start 0x100 length 0x100
00:31:53.115 	 crypto_ram3                 :       5.53     305.76      19.11       0.00     0.00   376058.11   17616.08  563714.46
00:31:53.115 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:31:53.115 	 Verification LBA range: start 0x0 length 0x100
00:31:53.115 	 crypto_ram4                 :       5.66     311.61      19.48       0.00     0.00   357707.01    1743.26  496605.59
00:31:53.115 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:31:53.115 	 Verification LBA range: start 0x100 length 0x100
00:31:53.115 	 crypto_ram4                 :       5.65     323.49      20.22       0.00     0.00   344482.25    1101.00  489894.71
00:31:53.115 ===================================================================================================================
00:31:53.115 Total                           :                 1413.19      88.32       0.00     0.00   670994.54    1101.00 2523293.29
00:31:53.373
00:31:53.373 real	0m8.806s
00:31:53.373 user	0m16.777s
00:31:53.373 sys	0m0.382s
00:31:53.373 13:31:33 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:53.373 13:31:33 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:31:53.373 ************************************
00:31:53.373 END TEST bdev_verify_big_io
00:31:53.373 ************************************
00:31:53.373 13:31:33 blockdev_crypto_aesni -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:31:53.373 13:31:33 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:31:53.373 13:31:33 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable
00:31:53.373 13:31:33 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:31:53.373
************************************ 00:31:53.373 START TEST bdev_write_zeroes 00:31:53.373 ************************************ 00:31:53.373 13:31:33 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:53.373 [2024-07-26 13:31:33.866695] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:31:53.373 [2024-07-26 13:31:33.866747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid882892 ] 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:53.632 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:53.632 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:53.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:53.632 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:53.632 [2024-07-26 13:31:33.997703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:53.632 [2024-07-26 13:31:34.080823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:53.632 [2024-07-26 13:31:34.102065] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:31:53.632 [2024-07-26 13:31:34.110086] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:31:53.633 [2024-07-26 13:31:34.118104] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: 
Operation decrypt will be assigned to module dpdk_cryptodev 00:31:53.891 [2024-07-26 13:31:34.221879] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:31:56.422 [2024-07-26 13:31:36.394001] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:31:56.422 [2024-07-26 13:31:36.394062] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:31:56.422 [2024-07-26 13:31:36.394075] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:56.422 [2024-07-26 13:31:36.402020] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:31:56.422 [2024-07-26 13:31:36.402037] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:31:56.422 [2024-07-26 13:31:36.402048] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:56.422 [2024-07-26 13:31:36.410041] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:31:56.422 [2024-07-26 13:31:36.410057] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:31:56.422 [2024-07-26 13:31:36.410068] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:56.422 [2024-07-26 13:31:36.418060] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:31:56.422 [2024-07-26 13:31:36.418076] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:31:56.422 [2024-07-26 13:31:36.418087] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:31:56.422 Running I/O for 1 seconds... 
00:31:57.358
00:31:57.358 Latency(us)
00:31:57.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:57.358 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:31:57.358 crypto_ram : 1.02 2125.93 8.30 0.00 0.00 59803.16 4980.74 71722.60
00:31:57.358 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:31:57.358 crypto_ram2 : 1.02 2131.65 8.33 0.00 0.00 59336.86 4954.52 66689.43
00:31:57.358 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:31:57.358 crypto_ram3 : 1.02 16338.63 63.82 0.00 0.00 7719.74 2280.65 9961.47
00:31:57.358 Job: crypto_ram4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:31:57.358 crypto_ram4 : 1.02 16376.05 63.97 0.00 0.00 7679.29 2280.65 8074.04
00:31:57.358 ===================================================================================================================
00:31:57.358 Total : 36972.25 144.42 0.00 0.00 13697.40 2280.65 71722.60
00:31:57.358
00:31:57.358 real 0m4.055s
00:31:57.358 user 0m3.685s
00:31:57.358 sys 0m0.333s
00:31:57.358 13:31:37 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:57.358 13:31:37 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:31:57.358 ************************************
00:31:57.358 END TEST bdev_write_zeroes
00:31:57.358 ************************************
00:31:57.616 13:31:37 blockdev_crypto_aesni -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:31:57.616 13:31:37 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:31:57.616 13:31:37 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable
00:31:57.616
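As a sanity check on the write_zeroes results above, the MiB/s column follows directly from the IOPS column and the 4096-byte IO size passed to bdevperf via `-o 4096`. A small illustrative calculation (not part of the test output itself):

```python
# Verify MiB/s = IOPS * IO size (bytes) / 2**20 for two rows of the
# write_zeroes table above.
io_size = 4096  # -o 4096 on the bdevperf command line
rows = [
    ("crypto_ram", 2125.93, 8.30),
    ("crypto_ram3", 16338.63, 63.82),
]
for name, iops, reported_mib_s in rows:
    mib_s = iops * io_size / 2**20
    # Reported values are rounded to two decimals, so allow that slack.
    assert abs(mib_s - reported_mib_s) < 0.01, (name, mib_s)
```

The same relation holds for the verify tables earlier in the log, where the IO size is 65536 bytes instead.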
13:31:37 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:31:57.616 ************************************
00:31:57.616 START TEST bdev_json_nonenclosed
00:31:57.616 ************************************
00:31:57.616 13:31:37 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:31:57.616 [2024-07-26 13:31:38.007483] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization...
00:31:57.616 [2024-07-26 13:31:38.007539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid883632 ]
00:31:57.876 [2024-07-26 13:31:38.139351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:57.876 [2024-07-26 13:31:38.222577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:31:57.876 [2024-07-26 13:31:38.222640] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
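The bdev_json_nonenclosed test deliberately feeds bdevperf a configuration that is not a single JSON object, and the expected failure is the json_config error just above. A rough Python sketch of the shape checks the log exercises (the real validation lives in SPDK's json_config.c, which is C; the sample inputs below are hypothetical, since the contents of nonenclosed.json are not shown in this log):

```python
import json

def prepare_ctx(text):
    # Rough re-creation of the two json_config_prepare_ctx checks seen
    # in this log: the config must be one JSON object, and its
    # "subsystems" member must be an array.
    try:
        cfg = json.loads(text)
    except json.JSONDecodeError:
        return "not enclosed in {}"
    if not isinstance(cfg, dict):
        return "not enclosed in {}"
    if not isinstance(cfg.get("subsystems"), list):
        return "'subsystems' should be an array"
    return "ok"

print(prepare_ctx('[{"subsystems": []}]'))  # a bare array is rejected: not enclosed in {}
print(prepare_ctx('{"subsystems": []}'))    # minimal valid shape: ok
```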
00:31:57.876 [2024-07-26 13:31:38.222657] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:31:57.876 [2024-07-26 13:31:38.222668] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:57.876 00:31:57.876 real 0m0.359s 00:31:57.876 user 0m0.199s 00:31:57.876 sys 0m0.157s 00:31:57.876 13:31:38 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:57.876 13:31:38 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:31:57.876 ************************************ 00:31:57.876 END TEST bdev_json_nonenclosed 00:31:57.876 ************************************ 00:31:57.876 13:31:38 blockdev_crypto_aesni -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:57.876 13:31:38 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:31:57.876 13:31:38 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:57.876 13:31:38 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:57.876 ************************************ 00:31:57.876 START TEST bdev_json_nonarray 00:31:57.876 ************************************ 00:31:57.876 13:31:38 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:31:58.135 [2024-07-26 13:31:38.451734] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:31:58.135 [2024-07-26 13:31:38.451787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid883719 ]
00:31:58.136 [2024-07-26 13:31:38.582888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:58.394 [2024-07-26 13:31:38.666211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:31:58.394 [2024-07-26 13:31:38.666276] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
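The bdev_json_nonarray case is the complement of the nonenclosed one: the file is a JSON object, but its "subsystems" member is not an array, which triggers the json_config error just above. A hypothetical config of that shape (the actual contents of nonarray.json are not shown in this log):

```python
import json

# An object whose "subsystems" member is itself an object rather than an
# array; this is the shape the error above rejects (hypothetical example).
bad = json.loads('{"subsystems": {"subsystem": "bdev", "config": []}}')
assert isinstance(bad, dict) and not isinstance(bad["subsystems"], list)

# The accepted shape: "subsystems" is an array of subsystem objects.
good = json.loads('{"subsystems": [{"subsystem": "bdev", "config": []}]}')
assert isinstance(good["subsystems"], list)
```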
00:31:58.394 [2024-07-26 13:31:38.666291] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:31:58.394 [2024-07-26 13:31:38.666302] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:58.394 00:31:58.394 real 0m0.357s 00:31:58.394 user 0m0.204s 00:31:58.394 sys 0m0.151s 00:31:58.394 13:31:38 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:58.394 13:31:38 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:31:58.394 ************************************ 00:31:58.394 END TEST bdev_json_nonarray 00:31:58.394 ************************************ 00:31:58.394 13:31:38 blockdev_crypto_aesni -- bdev/blockdev.sh@786 -- # [[ crypto_aesni == bdev ]] 00:31:58.394 13:31:38 blockdev_crypto_aesni -- bdev/blockdev.sh@793 -- # [[ crypto_aesni == gpt ]] 00:31:58.394 13:31:38 blockdev_crypto_aesni -- bdev/blockdev.sh@797 -- # [[ crypto_aesni == crypto_sw ]] 00:31:58.394 13:31:38 blockdev_crypto_aesni -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:31:58.394 13:31:38 blockdev_crypto_aesni -- bdev/blockdev.sh@810 -- # cleanup 00:31:58.394 13:31:38 blockdev_crypto_aesni -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:31:58.394 13:31:38 blockdev_crypto_aesni -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:31:58.394 13:31:38 blockdev_crypto_aesni -- bdev/blockdev.sh@26 -- # [[ crypto_aesni == rbd ]] 00:31:58.394 13:31:38 blockdev_crypto_aesni -- bdev/blockdev.sh@30 -- # [[ crypto_aesni == daos ]] 00:31:58.394 13:31:38 blockdev_crypto_aesni -- bdev/blockdev.sh@34 -- # [[ crypto_aesni = \g\p\t ]] 00:31:58.394 13:31:38 blockdev_crypto_aesni -- bdev/blockdev.sh@40 -- # [[ crypto_aesni == xnvme ]] 00:31:58.394 00:31:58.394 real 1m10.310s 00:31:58.394 user 2m54.973s 00:31:58.394 sys 0m8.413s 00:31:58.394 13:31:38 
blockdev_crypto_aesni -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:58.394 13:31:38 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:31:58.394 ************************************ 00:31:58.394 END TEST blockdev_crypto_aesni 00:31:58.394 ************************************ 00:31:58.394 13:31:38 -- spdk/autotest.sh@362 -- # run_test blockdev_crypto_sw /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw 00:31:58.394 13:31:38 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:31:58.394 13:31:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:58.394 13:31:38 -- common/autotest_common.sh@10 -- # set +x 00:31:58.394 ************************************ 00:31:58.394 START TEST blockdev_crypto_sw 00:31:58.394 ************************************ 00:31:58.394 13:31:38 blockdev_crypto_sw -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw 00:31:58.653 * Looking for test storage... 
00:31:58.653 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:31:58.653 13:31:38 blockdev_crypto_sw -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:31:58.653 13:31:38 blockdev_crypto_sw -- bdev/nbd_common.sh@6 -- # set -e 00:31:58.653 13:31:38 blockdev_crypto_sw -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:31:58.653 13:31:38 blockdev_crypto_sw -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:31:58.653 13:31:38 blockdev_crypto_sw -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:31:58.653 13:31:38 blockdev_crypto_sw -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:31:58.653 13:31:38 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:31:58.653 13:31:38 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:31:58.653 13:31:38 blockdev_crypto_sw -- bdev/blockdev.sh@20 -- # : 00:31:58.653 13:31:38 blockdev_crypto_sw -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:31:58.653 13:31:38 blockdev_crypto_sw -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:31:58.653 13:31:38 blockdev_crypto_sw -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:31:58.653 13:31:39 blockdev_crypto_sw -- bdev/blockdev.sh@673 -- # uname -s 00:31:58.653 13:31:39 blockdev_crypto_sw -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:31:58.653 13:31:39 blockdev_crypto_sw -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:31:58.653 13:31:39 blockdev_crypto_sw -- bdev/blockdev.sh@681 -- # test_type=crypto_sw 00:31:58.653 13:31:39 blockdev_crypto_sw -- bdev/blockdev.sh@682 -- # crypto_device= 00:31:58.653 13:31:39 blockdev_crypto_sw -- bdev/blockdev.sh@683 -- # dek= 00:31:58.653 13:31:39 blockdev_crypto_sw -- bdev/blockdev.sh@684 -- # env_ctx= 00:31:58.653 
13:31:39 blockdev_crypto_sw -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:31:58.653 13:31:39 blockdev_crypto_sw -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:31:58.653 13:31:39 blockdev_crypto_sw -- bdev/blockdev.sh@689 -- # [[ crypto_sw == bdev ]] 00:31:58.653 13:31:39 blockdev_crypto_sw -- bdev/blockdev.sh@689 -- # [[ crypto_sw == crypto_* ]] 00:31:58.653 13:31:39 blockdev_crypto_sw -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:31:58.653 13:31:39 blockdev_crypto_sw -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:31:58.653 13:31:39 blockdev_crypto_sw -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=883783 00:31:58.653 13:31:39 blockdev_crypto_sw -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:58.653 13:31:39 blockdev_crypto_sw -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:31:58.653 13:31:39 blockdev_crypto_sw -- bdev/blockdev.sh@49 -- # waitforlisten 883783 00:31:58.653 13:31:39 blockdev_crypto_sw -- common/autotest_common.sh@831 -- # '[' -z 883783 ']' 00:31:58.653 13:31:39 blockdev_crypto_sw -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:58.653 13:31:39 blockdev_crypto_sw -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:58.653 13:31:39 blockdev_crypto_sw -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:58.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:58.653 13:31:39 blockdev_crypto_sw -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:58.653 13:31:39 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:31:58.653 [2024-07-26 13:31:39.078076] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:31:58.653 [2024-07-26 13:31:39.078148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid883783 ]
00:31:58.914 [2024-07-26 13:31:39.212491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:58.914 [2024-07-26 13:31:39.298060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:31:59.517 13:31:39 blockdev_crypto_sw -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:31:59.517 13:31:39 blockdev_crypto_sw -- common/autotest_common.sh@864 -- # return 0
00:31:59.517 13:31:39 blockdev_crypto_sw -- bdev/blockdev.sh@693 -- # case "$test_type" in
00:31:59.517 13:31:39 blockdev_crypto_sw -- bdev/blockdev.sh@710 -- # setup_crypto_sw_conf
00:31:59.517 13:31:39 blockdev_crypto_sw -- bdev/blockdev.sh@192 -- # rpc_cmd
00:31:59.517 13:31:39 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:59.517 13:31:39 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x
00:31:59.776 Malloc0
00:31:59.776 Malloc1
00:31:59.776 true
00:31:59.776 true
00:31:59.776 true
00:31:59.776 [2024-07-26 13:31:40.236993] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw"
00:31:59.776 crypto_ram
00:31:59.776 [2024-07-26 13:31:40.245019] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create:
*NOTICE*: Found key "test_dek_sw2" 00:31:59.776 crypto_ram2 00:31:59.776 [2024-07-26 13:31:40.253040] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:31:59.776 crypto_ram3 00:31:59.776 [ 00:31:59.776 { 00:31:59.776 "name": "Malloc1", 00:31:59.776 "aliases": [ 00:31:59.776 "0ab86f21-946e-468b-81ca-4a21a0b9d740" 00:31:59.776 ], 00:31:59.776 "product_name": "Malloc disk", 00:31:59.776 "block_size": 4096, 00:31:59.776 "num_blocks": 4096, 00:31:59.776 "uuid": "0ab86f21-946e-468b-81ca-4a21a0b9d740", 00:31:59.776 "assigned_rate_limits": { 00:31:59.776 "rw_ios_per_sec": 0, 00:31:59.776 "rw_mbytes_per_sec": 0, 00:31:59.776 "r_mbytes_per_sec": 0, 00:31:59.776 "w_mbytes_per_sec": 0 00:31:59.776 }, 00:31:59.776 "claimed": true, 00:31:59.776 "claim_type": "exclusive_write", 00:31:59.776 "zoned": false, 00:31:59.776 "supported_io_types": { 00:31:59.776 "read": true, 00:31:59.776 "write": true, 00:31:59.776 "unmap": true, 00:31:59.776 "flush": true, 00:31:59.776 "reset": true, 00:31:59.776 "nvme_admin": false, 00:31:59.776 "nvme_io": false, 00:31:59.776 "nvme_io_md": false, 00:31:59.776 "write_zeroes": true, 00:31:59.776 "zcopy": true, 00:31:59.776 "get_zone_info": false, 00:31:59.776 "zone_management": false, 00:31:59.776 "zone_append": false, 00:31:59.776 "compare": false, 00:31:59.776 "compare_and_write": false, 00:31:59.776 "abort": true, 00:31:59.776 "seek_hole": false, 00:31:59.776 "seek_data": false, 00:31:59.776 "copy": true, 00:31:59.776 "nvme_iov_md": false 00:31:59.776 }, 00:31:59.776 "memory_domains": [ 00:31:59.776 { 00:31:59.776 "dma_device_id": "system", 00:31:59.776 "dma_device_type": 1 00:31:59.776 }, 00:31:59.776 { 00:31:59.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:59.776 "dma_device_type": 2 00:31:59.776 } 00:31:59.776 ], 00:31:59.776 "driver_specific": {} 00:31:59.776 } 00:31:59.776 ] 00:31:59.776 13:31:40 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.776 13:31:40 
blockdev_crypto_sw -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:31:59.776 13:31:40 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.776 13:31:40 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:31:59.776 13:31:40 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:59.776 13:31:40 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # cat 00:31:59.776 13:31:40 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:31:59.776 13:31:40 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:59.776 13:31:40 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:00.035 13:31:40 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.035 13:31:40 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:32:00.035 13:31:40 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.035 13:31:40 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:00.035 13:31:40 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.035 13:31:40 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:32:00.035 13:31:40 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.035 13:31:40 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:00.035 13:31:40 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.035 13:31:40 blockdev_crypto_sw -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:32:00.035 13:31:40 blockdev_crypto_sw -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:32:00.035 13:31:40 blockdev_crypto_sw -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:32:00.035 13:31:40 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:00.035 13:31:40 blockdev_crypto_sw -- 
common/autotest_common.sh@10 -- # set +x 00:32:00.035 13:31:40 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:00.035 13:31:40 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:32:00.035 13:31:40 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "17866dda-d52e-50de-9e17-5c1ab02bf17b"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "17866dda-d52e-50de-9e17-5c1ab02bf17b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "411e44e4-9ba1-5675-b3c8-3ffc938cfd57"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "411e44e4-9ba1-5675-b3c8-3ffc938cfd57",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' 
"nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:32:00.035 13:31:40 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # jq -r .name 00:32:00.035 13:31:40 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:32:00.035 13:31:40 blockdev_crypto_sw -- bdev/blockdev.sh@751 -- # hello_world_bdev=crypto_ram 00:32:00.035 13:31:40 blockdev_crypto_sw -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:32:00.035 13:31:40 blockdev_crypto_sw -- bdev/blockdev.sh@753 -- # killprocess 883783 00:32:00.035 13:31:40 blockdev_crypto_sw -- common/autotest_common.sh@950 -- # '[' -z 883783 ']' 00:32:00.035 13:31:40 blockdev_crypto_sw -- common/autotest_common.sh@954 -- # kill -0 883783 00:32:00.035 13:31:40 blockdev_crypto_sw -- common/autotest_common.sh@955 -- # uname 00:32:00.035 13:31:40 blockdev_crypto_sw -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:00.035 13:31:40 blockdev_crypto_sw -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 883783 00:32:00.035 13:31:40 blockdev_crypto_sw -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:00.035 13:31:40 blockdev_crypto_sw -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:00.035 13:31:40 blockdev_crypto_sw -- common/autotest_common.sh@968 -- # echo 'killing process with pid 883783' 00:32:00.035 killing process with pid 883783 00:32:00.035 13:31:40 blockdev_crypto_sw -- common/autotest_common.sh@969 -- # kill 883783 00:32:00.035 
13:31:40 blockdev_crypto_sw -- common/autotest_common.sh@974 -- # wait 883783 00:32:00.602 13:31:40 blockdev_crypto_sw -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:00.602 13:31:40 blockdev_crypto_sw -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:32:00.603 13:31:40 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:32:00.603 13:31:40 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:00.603 13:31:40 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:00.603 ************************************ 00:32:00.603 START TEST bdev_hello_world 00:32:00.603 ************************************ 00:32:00.603 13:31:40 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:32:00.603 [2024-07-26 13:31:40.967700] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:32:00.603 [2024-07-26 13:31:40.967755] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid884139 ] 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3d:01.1 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3d:01.2 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3d:01.3 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3d:01.4 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3d:01.5 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3d:01.6 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3d:01.7 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3d:02.0 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3d:02.1 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3d:02.2 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3d:02.3 cannot be used 
00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3d:02.4 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3d:02.5 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3d:02.6 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3d:02.7 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3f:01.0 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3f:01.1 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3f:01.2 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3f:01.3 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3f:01.4 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3f:01.5 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3f:01.6 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3f:01.7 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3f:02.0 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3f:02.1 cannot be used 00:32:00.603 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3f:02.2 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3f:02.3 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3f:02.4 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3f:02.5 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3f:02.6 cannot be used 00:32:00.603 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:00.603 EAL: Requested device 0000:3f:02.7 cannot be used 00:32:00.603 [2024-07-26 13:31:41.098618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.862 [2024-07-26 13:31:41.186448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.862 [2024-07-26 13:31:41.353277] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:32:00.862 [2024-07-26 13:31:41.353336] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:32:00.862 [2024-07-26 13:31:41.353350] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:00.862 [2024-07-26 13:31:41.361311] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:32:00.862 [2024-07-26 13:31:41.361328] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:00.862 [2024-07-26 13:31:41.361339] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:00.862 [2024-07-26 13:31:41.369318] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:32:00.862 [2024-07-26 13:31:41.369334] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:32:00.862 [2024-07-26 13:31:41.369344] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:01.125 [2024-07-26 13:31:41.409042] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:32:01.125 [2024-07-26 13:31:41.409073] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:32:01.125 [2024-07-26 13:31:41.409090] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:32:01.125 [2024-07-26 13:31:41.410320] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:32:01.125 [2024-07-26 13:31:41.410390] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:32:01.125 [2024-07-26 13:31:41.410405] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:32:01.125 [2024-07-26 13:31:41.410436] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:32:01.125 00:32:01.125 [2024-07-26 13:31:41.410452] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:32:01.125 00:32:01.125 real 0m0.689s 00:32:01.125 user 0m0.455s 00:32:01.125 sys 0m0.214s 00:32:01.125 13:31:41 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:01.125 13:31:41 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:32:01.125 ************************************ 00:32:01.125 END TEST bdev_hello_world 00:32:01.125 ************************************ 00:32:01.125 13:31:41 blockdev_crypto_sw -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:32:01.125 13:31:41 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:01.125 13:31:41 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:01.125 13:31:41 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:01.387 ************************************ 00:32:01.387 START TEST bdev_bounds 00:32:01.387 ************************************ 00:32:01.387 13:31:41 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:32:01.387 13:31:41 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=884349 00:32:01.387 13:31:41 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:32:01.387 13:31:41 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:32:01.387 13:31:41 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 884349' 00:32:01.387 Process bdevio pid: 884349 00:32:01.387 13:31:41 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 884349 00:32:01.387 13:31:41 blockdev_crypto_sw.bdev_bounds -- 
common/autotest_common.sh@831 -- # '[' -z 884349 ']' 00:32:01.387 13:31:41 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:01.387 13:31:41 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:01.387 13:31:41 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:01.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:01.387 13:31:41 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:01.387 13:31:41 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:32:01.387 [2024-07-26 13:31:41.740837] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:32:01.387 [2024-07-26 13:31:41.740892] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid884349 ] 00:32:01.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.387 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:01.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.387 EAL: Requested device 0000:3d:01.1 cannot be used 00:32:01.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.387 EAL: Requested device 0000:3d:01.2 cannot be used 00:32:01.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.387 EAL: Requested device 0000:3d:01.3 cannot be used 00:32:01.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.387 EAL: Requested device 0000:3d:01.4 cannot be used 00:32:01.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.387 EAL: 
Requested device 0000:3d:01.5 cannot be used 00:32:01.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.387 EAL: Requested device 0000:3d:01.6 cannot be used 00:32:01.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.387 EAL: Requested device 0000:3d:01.7 cannot be used 00:32:01.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.387 EAL: Requested device 0000:3d:02.0 cannot be used 00:32:01.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.387 EAL: Requested device 0000:3d:02.1 cannot be used 00:32:01.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.387 EAL: Requested device 0000:3d:02.2 cannot be used 00:32:01.387 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.387 EAL: Requested device 0000:3d:02.3 cannot be used 00:32:01.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.388 EAL: Requested device 0000:3d:02.4 cannot be used 00:32:01.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.388 EAL: Requested device 0000:3d:02.5 cannot be used 00:32:01.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.388 EAL: Requested device 0000:3d:02.6 cannot be used 00:32:01.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.388 EAL: Requested device 0000:3d:02.7 cannot be used 00:32:01.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.388 EAL: Requested device 0000:3f:01.0 cannot be used 00:32:01.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.388 EAL: Requested device 0000:3f:01.1 cannot be used 00:32:01.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.388 EAL: Requested device 0000:3f:01.2 cannot be used 00:32:01.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.388 EAL: Requested device 
0000:3f:01.3 cannot be used 00:32:01.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.388 EAL: Requested device 0000:3f:01.4 cannot be used 00:32:01.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.388 EAL: Requested device 0000:3f:01.5 cannot be used 00:32:01.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.388 EAL: Requested device 0000:3f:01.6 cannot be used 00:32:01.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.388 EAL: Requested device 0000:3f:01.7 cannot be used 00:32:01.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.388 EAL: Requested device 0000:3f:02.0 cannot be used 00:32:01.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.388 EAL: Requested device 0000:3f:02.1 cannot be used 00:32:01.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.388 EAL: Requested device 0000:3f:02.2 cannot be used 00:32:01.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.388 EAL: Requested device 0000:3f:02.3 cannot be used 00:32:01.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.388 EAL: Requested device 0000:3f:02.4 cannot be used 00:32:01.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.388 EAL: Requested device 0000:3f:02.5 cannot be used 00:32:01.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.388 EAL: Requested device 0000:3f:02.6 cannot be used 00:32:01.388 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:01.388 EAL: Requested device 0000:3f:02.7 cannot be used 00:32:01.388 [2024-07-26 13:31:41.873559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:01.645 [2024-07-26 13:31:41.962801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:01.645 [2024-07-26 13:31:41.962896] reactor.c: 941:reactor_run: 
*NOTICE*: Reactor started on core 2 00:32:01.645 [2024-07-26 13:31:41.962900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:01.645 [2024-07-26 13:31:42.120929] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:32:01.645 [2024-07-26 13:31:42.120983] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:32:01.645 [2024-07-26 13:31:42.120997] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:01.646 [2024-07-26 13:31:42.128949] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:32:01.646 [2024-07-26 13:31:42.128965] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:01.646 [2024-07-26 13:31:42.128976] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:01.646 [2024-07-26 13:31:42.136972] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:32:01.646 [2024-07-26 13:31:42.136988] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:32:01.646 [2024-07-26 13:31:42.136998] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:02.210 13:31:42 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:02.211 13:31:42 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:32:02.211 13:31:42 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:32:02.468 I/O targets: 00:32:02.468 crypto_ram: 32768 blocks of 512 bytes (16 MiB) 00:32:02.468 crypto_ram3: 4096 blocks of 4096 bytes (16 MiB) 00:32:02.468 00:32:02.468 00:32:02.468 CUnit - A unit testing framework for C - Version 2.1-3 00:32:02.468 http://cunit.sourceforge.net/ 
00:32:02.468 00:32:02.468 00:32:02.468 Suite: bdevio tests on: crypto_ram3 00:32:02.468 Test: blockdev write read block ...passed 00:32:02.468 Test: blockdev write zeroes read block ...passed 00:32:02.468 Test: blockdev write zeroes read no split ...passed 00:32:02.468 Test: blockdev write zeroes read split ...passed 00:32:02.468 Test: blockdev write zeroes read split partial ...passed 00:32:02.468 Test: blockdev reset ...passed 00:32:02.468 Test: blockdev write read 8 blocks ...passed 00:32:02.469 Test: blockdev write read size > 128k ...passed 00:32:02.469 Test: blockdev write read invalid size ...passed 00:32:02.469 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:02.469 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:02.469 Test: blockdev write read max offset ...passed 00:32:02.469 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:02.469 Test: blockdev writev readv 8 blocks ...passed 00:32:02.469 Test: blockdev writev readv 30 x 1block ...passed 00:32:02.469 Test: blockdev writev readv block ...passed 00:32:02.469 Test: blockdev writev readv size > 128k ...passed 00:32:02.469 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:02.469 Test: blockdev comparev and writev ...passed 00:32:02.469 Test: blockdev nvme passthru rw ...passed 00:32:02.469 Test: blockdev nvme passthru vendor specific ...passed 00:32:02.469 Test: blockdev nvme admin passthru ...passed 00:32:02.469 Test: blockdev copy ...passed 00:32:02.469 Suite: bdevio tests on: crypto_ram 00:32:02.469 Test: blockdev write read block ...passed 00:32:02.469 Test: blockdev write zeroes read block ...passed 00:32:02.469 Test: blockdev write zeroes read no split ...passed 00:32:02.469 Test: blockdev write zeroes read split ...passed 00:32:02.469 Test: blockdev write zeroes read split partial ...passed 00:32:02.469 Test: blockdev reset ...passed 00:32:02.469 Test: blockdev write read 8 blocks ...passed 
00:32:02.469 Test: blockdev write read size > 128k ...passed 00:32:02.469 Test: blockdev write read invalid size ...passed 00:32:02.469 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:02.469 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:02.469 Test: blockdev write read max offset ...passed 00:32:02.469 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:02.469 Test: blockdev writev readv 8 blocks ...passed 00:32:02.469 Test: blockdev writev readv 30 x 1block ...passed 00:32:02.469 Test: blockdev writev readv block ...passed 00:32:02.469 Test: blockdev writev readv size > 128k ...passed 00:32:02.469 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:02.469 Test: blockdev comparev and writev ...passed 00:32:02.469 Test: blockdev nvme passthru rw ...passed 00:32:02.469 Test: blockdev nvme passthru vendor specific ...passed 00:32:02.469 Test: blockdev nvme admin passthru ...passed 00:32:02.469 Test: blockdev copy ...passed 00:32:02.469 00:32:02.469 Run Summary: Type Total Ran Passed Failed Inactive 00:32:02.469 suites 2 2 n/a 0 0 00:32:02.469 tests 46 46 46 0 0 00:32:02.469 asserts 260 260 260 0 n/a 00:32:02.469 00:32:02.469 Elapsed time = 0.078 seconds 00:32:02.469 0 00:32:02.469 13:31:42 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 884349 00:32:02.469 13:31:42 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 884349 ']' 00:32:02.469 13:31:42 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 884349 00:32:02.469 13:31:42 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:32:02.469 13:31:42 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:02.469 13:31:42 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 884349 00:32:02.469 13:31:42 blockdev_crypto_sw.bdev_bounds -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:02.469 13:31:42 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:02.469 13:31:42 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 884349' 00:32:02.469 killing process with pid 884349 00:32:02.469 13:31:42 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@969 -- # kill 884349 00:32:02.469 13:31:42 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@974 -- # wait 884349 00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:32:02.728 00:32:02.728 real 0m1.377s 00:32:02.728 user 0m3.617s 00:32:02.728 sys 0m0.354s 00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:32:02.728 ************************************ 00:32:02.728 END TEST bdev_bounds 00:32:02.728 ************************************ 00:32:02.728 13:31:43 blockdev_crypto_sw -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:32:02.728 13:31:43 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:32:02.728 13:31:43 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:02.728 13:31:43 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:02.728 ************************************ 00:32:02.728 START TEST bdev_nbd 00:32:02.728 ************************************ 00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 
00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('crypto_ram' 'crypto_ram3') 00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=2 00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=2 00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=884635 00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@316 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 884635 /var/tmp/spdk-nbd.sock 00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 884635 ']' 00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:32:02.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:02.728 13:31:43 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:32:02.728 [2024-07-26 13:31:43.210469] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:32:02.728 [2024-07-26 13:31:43.210535] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3d:01.1 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3d:01.2 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3d:01.3 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3d:01.4 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3d:01.5 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3d:01.6 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3d:01.7 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3d:02.0 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3d:02.1 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3d:02.2 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3d:02.3 cannot be used 00:32:02.986 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3d:02.4 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3d:02.5 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3d:02.6 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3d:02.7 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3f:01.0 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3f:01.1 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3f:01.2 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3f:01.3 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3f:01.4 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3f:01.5 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3f:01.6 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3f:01.7 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3f:02.0 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3f:02.1 cannot be used 00:32:02.986 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3f:02.2 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3f:02.3 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3f:02.4 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3f:02.5 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3f:02.6 cannot be used 00:32:02.986 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:02.986 EAL: Requested device 0000:3f:02.7 cannot be used 00:32:02.986 [2024-07-26 13:31:43.343126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:02.986 [2024-07-26 13:31:43.424849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:03.244 [2024-07-26 13:31:43.594749] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:32:03.244 [2024-07-26 13:31:43.594815] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:32:03.244 [2024-07-26 13:31:43.594829] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:03.244 [2024-07-26 13:31:43.602767] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:32:03.244 [2024-07-26 13:31:43.602783] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:03.244 [2024-07-26 13:31:43.602794] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:03.244 [2024-07-26 13:31:43.610788] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:32:03.244 [2024-07-26 13:31:43.610804] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:32:03.244 [2024-07-26 13:31:43.610814] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:03.810 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:03.810 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:32:03.810 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:32:03.810 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:03.810 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:32:03.810 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:32:03.810 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:32:03.810 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:03.810 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:32:03.810 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:32:03.810 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:32:03.810 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:32:03.810 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:32:03.810 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:32:03.810 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:32:04.069 
13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:32:04.069 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:32:04.069 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:32:04.069 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:32:04.069 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:32:04.069 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:04.069 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:04.069 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:32:04.070 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:32:04.070 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:04.070 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:04.070 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:04.070 1+0 records in 00:32:04.070 1+0 records out 00:32:04.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264864 s, 15.5 MB/s 00:32:04.070 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:04.070 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:32:04.070 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:04.070 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:04.070 13:31:44 
blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:32:04.070 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:32:04.070 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:32:04.070 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:32:04.328 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:32:04.328 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:32:04.328 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:32:04.328 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:32:04.328 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:32:04.328 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:04.328 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:04.328 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:32:04.328 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:32:04.328 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:04.328 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:04.328 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:04.328 1+0 records in 00:32:04.328 1+0 records out 00:32:04.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286584 s, 14.3 MB/s 00:32:04.328 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat 
-c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:04.328 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:32:04.328 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:04.328 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:04.328 13:31:44 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:32:04.328 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:32:04.328 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:32:04.328 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:04.587 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:32:04.587 { 00:32:04.587 "nbd_device": "/dev/nbd0", 00:32:04.587 "bdev_name": "crypto_ram" 00:32:04.587 }, 00:32:04.587 { 00:32:04.587 "nbd_device": "/dev/nbd1", 00:32:04.587 "bdev_name": "crypto_ram3" 00:32:04.587 } 00:32:04.587 ]' 00:32:04.587 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:32:04.587 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:32:04.587 { 00:32:04.587 "nbd_device": "/dev/nbd0", 00:32:04.587 "bdev_name": "crypto_ram" 00:32:04.587 }, 00:32:04.587 { 00:32:04.587 "nbd_device": "/dev/nbd1", 00:32:04.587 "bdev_name": "crypto_ram3" 00:32:04.587 } 00:32:04.587 ]' 00:32:04.587 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:32:04.587 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:32:04.587 13:31:44 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:04.587 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:04.587 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:04.587 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:32:04.587 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:04.587 13:31:44 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:04.845 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:04.845 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:04.845 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:04.845 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:04.845 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:04.845 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:04.845 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:04.845 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:04.845 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:04.845 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:32:05.104 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:05.104 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:05.104 13:31:45 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:05.104 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:05.104 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:05.104 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:05.104 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:05.104 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:05.104 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:05.104 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:05.104 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:05.362 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:32:05.362 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:32:05.362 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:05.362 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:32:05.362 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:32:05.362 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:05.362 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:32:05.362 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:32:05.362 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:32:05.362 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:32:05.362 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 
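The `nbd_get_count` step above derives the active device count by piping the `nbd_get_disks` JSON reply through `jq -r '.[] | .nbd_device'` and counting matches with `grep -c /dev/nbd`. A standalone sketch of that extraction, using a reply shaped like the two-device one earlier in the trace (the JSON literal here is illustrative, not captured output):

```shell
#!/bin/sh
# Extract nbd device paths from an nbd_get_disks-style JSON reply and
# count them, mirroring the jq + grep -c pattern in the trace above.
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "crypto_ram" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "crypto_ram3" }
]'

# jq -r prints one raw device path per line
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')

# grep -c counts the lines naming an nbd device
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd)
echo "$count"   # prints 2 for the two-device reply above
```

An empty reply (`[]`) makes `jq` print nothing and `grep -c` report 0, which is the `count=0` branch the trace takes after both devices are stopped.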
00:32:05.362 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:32:05.362 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:32:05.362 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:05.362 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:32:05.362 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:32:05.362 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:05.362 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:32:05.362 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:32:05.362 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:05.362 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:32:05.362 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:05.362 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:05.362 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:05.362 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:32:05.362 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:05.362 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:05.362 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 
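Each `nbd_start_disk` call in the trace is followed by `waitfornbd`, which polls `/proc/partitions` up to 20 times for the device name before trusting it with I/O. A minimal reconstruction of that polling loop; the `WAITFORNBD_PARTITIONS` override is an assumption added here so the sketch can be exercised without a real nbd device, and is not part of the original helper:

```shell
#!/bin/sh
# Poll until a block device name appears in the kernel partition table,
# as waitfornbd does in the trace above. Returns 0 once the name shows
# up, 1 after 20 attempts.
waitfornbd() {
    nbd_name=$1
    # assumption for testability: allow overriding the partitions file
    partitions=${WAITFORNBD_PARTITIONS:-/proc/partitions}
    i=1
    while [ "$i" -le 20 ]; do
        # -w matches the whole word, so nbd1 does not match nbd10
        if grep -q -w "$nbd_name" "$partitions"; then
            return 0
        fi
        sleep 0.1
        i=$((i + 1))
    done
    return 1
}
```

The trace follows each successful wait with a one-block `dd ... iflag=direct` read, which forces the request through the nbd queue rather than the page cache before the device is declared usable.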
00:32:05.621 /dev/nbd0 00:32:05.621 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:05.621 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:05.621 13:31:45 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:32:05.621 13:31:45 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:32:05.621 13:31:45 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:05.621 13:31:45 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:05.621 13:31:45 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:32:05.621 13:31:45 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:32:05.621 13:31:45 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:05.621 13:31:45 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:05.621 13:31:45 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:05.621 1+0 records in 00:32:05.621 1+0 records out 00:32:05.621 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226097 s, 18.1 MB/s 00:32:05.621 13:31:45 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:05.621 13:31:45 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:32:05.621 13:31:45 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:05.621 13:31:45 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:05.621 13:31:45 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # 
return 0 00:32:05.621 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:05.621 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:05.621 13:31:45 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd1 00:32:05.879 /dev/nbd1 00:32:05.879 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:05.879 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:05.879 13:31:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:32:05.880 13:31:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:32:05.880 13:31:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:05.880 13:31:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:05.880 13:31:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:32:05.880 13:31:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:32:05.880 13:31:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:05.880 13:31:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:05.880 13:31:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:05.880 1+0 records in 00:32:05.880 1+0 records out 00:32:05.880 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000317038 s, 12.9 MB/s 00:32:05.880 13:31:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:05.880 13:31:46 blockdev_crypto_sw.bdev_nbd -- 
common/autotest_common.sh@886 -- # size=4096 00:32:05.880 13:31:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:05.880 13:31:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:05.880 13:31:46 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:32:05.880 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:05.880 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:05.880 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:05.880 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:05.880 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:32:06.139 { 00:32:06.139 "nbd_device": "/dev/nbd0", 00:32:06.139 "bdev_name": "crypto_ram" 00:32:06.139 }, 00:32:06.139 { 00:32:06.139 "nbd_device": "/dev/nbd1", 00:32:06.139 "bdev_name": "crypto_ram3" 00:32:06.139 } 00:32:06.139 ]' 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:32:06.139 { 00:32:06.139 "nbd_device": "/dev/nbd0", 00:32:06.139 "bdev_name": "crypto_ram" 00:32:06.139 }, 00:32:06.139 { 00:32:06.139 "nbd_device": "/dev/nbd1", 00:32:06.139 "bdev_name": "crypto_ram3" 00:32:06.139 } 00:32:06.139 ]' 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:32:06.139 /dev/nbd1' 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo 
'/dev/nbd0 00:32:06.139 /dev/nbd1' 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:32:06.139 256+0 records in 00:32:06.139 256+0 records out 00:32:06.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114499 s, 91.6 MB/s 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:32:06.139 256+0 records in 00:32:06.139 256+0 records out 00:32:06.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0181666 s, 57.7 MB/s 00:32:06.139 13:31:46 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:32:06.139 256+0 records in 00:32:06.139 256+0 records out 00:32:06.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0366746 s, 28.6 MB/s 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- 
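The write/verify round trip traced above is the core of the nbd data-integrity check: one random 1 MiB file is written to every device in the list, then each device is compared byte-for-byte against that file. A simplified sketch of the same flow; the real helper passes oflag=direct so writes bypass the page cache, which is omitted here so the sketch also works against regular files:

```shell
# Sketch of an nbd_dd_data_verify-style round trip; tmp_file path is
# illustrative, and targets may be block devices or plain files.
tmp_file=/tmp/nbdrandtest.$$

nbd_dd_data_verify() {
    local operation=$1; shift
    local nbd_list=("$@")
    if [ "$operation" = write ]; then
        # one shared random payload: 256 x 4 KiB = 1 MiB
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
        for dev in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
        done
    elif [ "$operation" = verify ]; then
        for dev in "${nbd_list[@]}"; do
            # compare the first 1 MiB of the device against the payload
            cmp -b -n 1M "$tmp_file" "$dev" || return 1
        done
        rm -f "$tmp_file"
    fi
}
```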
bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:06.139 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:06.398 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:06.398 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:06.398 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:06.398 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:06.398 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:06.398 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:06.398 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:06.398 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:06.398 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:06.398 13:31:46 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:32:06.655 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:06.655 13:31:47 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:06.655 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:06.655 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:06.655 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:06.655 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:06.655 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:06.655 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:06.655 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:32:06.655 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:06.655 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:32:06.913 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:32:06.913 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:32:06.913 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:32:06.913 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:32:06.913 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:32:06.913 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:32:06.913 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:32:06.913 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:32:06.913 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:32:06.913 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 
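The nbd_get_count flow traced here fetches the active disk list over RPC and counts /dev/nbd entries with jq and grep -c; because grep -c exits nonzero when the count is zero, the helper guards it (visible as the bare `true` at nbd_common.sh@65). An adapted sketch that takes the RPC's JSON directly so it needs no running SPDK target; the function name is mine, not SPDK's:

```shell
# Adapted sketch: count nbd devices in nbd_get_disks JSON output.
# grep -c prints the count but exits 1 when it is zero, hence || true.
count_nbd_devices() {
    local nbd_disks_json=$1
    echo "$nbd_disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true
}
```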
00:32:06.913 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:32:06.913 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:32:06.913 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:32:06.913 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:06.913 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:06.913 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:32:06.913 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:32:06.913 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:32:07.172 malloc_lvol_verify 00:32:07.172 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:32:07.431 6c88cbea-36e7-4579-9d16-61c407135f4b 00:32:07.431 13:31:47 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:32:08.000 70dc5ff3-def0-4518-90c8-86fa1521ba19 00:32:08.000 13:31:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:32:08.000 /dev/nbd0 00:32:08.000 13:31:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:32:08.000 mke2fs 1.46.5 (30-Dec-2021) 00:32:08.000 Discarding device blocks: 0/4096 done 00:32:08.000 Creating filesystem with 4096 1k 
blocks and 1024 inodes 00:32:08.000 00:32:08.000 Allocating group tables: 0/1 done 00:32:08.000 Writing inode tables: 0/1 done 00:32:08.000 Creating journal (1024 blocks): done 00:32:08.000 Writing superblocks and filesystem accounting information: 0/1 done 00:32:08.000 00:32:08.000 13:31:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:32:08.000 13:31:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:32:08.000 13:31:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:32:08.000 13:31:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:08.000 13:31:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:08.000 13:31:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:32:08.000 13:31:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:08.000 13:31:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:32:08.259 13:31:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:08.259 13:31:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:08.259 13:31:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:08.259 13:31:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:08.259 13:31:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:08.259 13:31:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:08.259 13:31:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:32:08.259 13:31:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:32:08.259 13:31:48 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:32:08.259 13:31:48 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:32:08.259 13:31:48 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 884635 00:32:08.259 13:31:48 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 884635 ']' 00:32:08.259 13:31:48 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 884635 00:32:08.259 13:31:48 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:32:08.259 13:31:48 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:08.259 13:31:48 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 884635 00:32:08.259 13:31:48 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:08.259 13:31:48 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:08.259 13:31:48 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 884635' 00:32:08.259 killing process with pid 884635 00:32:08.259 13:31:48 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@969 -- # kill 884635 00:32:08.259 13:31:48 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@974 -- # wait 884635 00:32:08.518 13:31:48 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:32:08.518 00:32:08.518 real 0m5.843s 00:32:08.518 user 0m8.324s 00:32:08.518 sys 0m2.334s 00:32:08.518 13:31:48 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:08.518 13:31:48 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:32:08.518 ************************************ 00:32:08.518 END TEST bdev_nbd 00:32:08.518 ************************************ 00:32:08.518 13:31:49 blockdev_crypto_sw -- bdev/blockdev.sh@762 -- # [[ y == y 
]] 00:32:08.518 13:31:49 blockdev_crypto_sw -- bdev/blockdev.sh@763 -- # '[' crypto_sw = nvme ']' 00:32:08.518 13:31:49 blockdev_crypto_sw -- bdev/blockdev.sh@763 -- # '[' crypto_sw = gpt ']' 00:32:08.518 13:31:49 blockdev_crypto_sw -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:32:08.518 13:31:49 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:08.518 13:31:49 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:08.518 13:31:49 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:08.777 ************************************ 00:32:08.777 START TEST bdev_fio 00:32:08.777 ************************************ 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:32:08.777 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- 
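The killprocess sequence traced above checks the pid is still alive with kill -0, inspects the process name with ps so a sudo wrapper is never signalled directly, then kills and waits. A minimal sketch under those assumptions; the real helper has extra branches (e.g. for the sudo case) not reproduced here:

```shell
# Hedged sketch of a killprocess-style helper.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # process must still exist
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = "sudo" ] && return 1         # never signal a sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap it if it is our child
}
```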
common/autotest_common.sh@1281 -- # local workload=verify 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram]' 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo 
filename=crypto_ram 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram3]' 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram3 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:08.777 13:31:49 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:32:08.777 ************************************ 00:32:08.777 START TEST bdev_fio_rw_verify 00:32:08.777 ************************************ 00:32:08.778 13:31:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:08.778 13:31:49 
blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:08.778 13:31:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:08.778 13:31:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:08.778 13:31:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:08.778 13:31:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:08.778 13:31:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:32:08.778 13:31:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:08.778 13:31:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:08.778 13:31:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:08.778 13:31:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:32:08.778 13:31:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:08.778 13:31:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 
00:32:08.778 13:31:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:08.778 13:31:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:08.778 13:31:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:08.778 13:31:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:08.778 13:31:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:08.778 13:31:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:08.778 13:31:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:08.778 13:31:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:08.778 13:31:49 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:09.347 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:09.347 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:09.347 fio-3.35 00:32:09.347 Starting 2 threads 00:32:09.347 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:09.347 EAL: Requested device 
0000:3d:01.0 cannot be used 00:32:21.555 00:32:21.555 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=885969: Fri Jul 26 13:32:00 2024 00:32:21.555 read: IOPS=19.4k, BW=75.6MiB/s (79.3MB/s)(757MiB/10001msec) 00:32:21.555 slat (usec): min=10, max=162, avg=22.81, stdev=13.07 00:32:21.555 clat (usec): min=15, max=773, avg=164.46, stdev=98.90 00:32:21.555 lat (usec): min=26, max=815, avg=187.27, stdev=108.60 00:32:21.555 clat percentiles (usec): 00:32:21.555 | 50.000th=[ 135], 99.000th=[ 469], 99.900th=[ 562], 99.990th=[ 619], 00:32:21.555 | 99.999th=[ 750] 00:32:21.555 write: IOPS=23.3k, BW=90.9MiB/s (95.3MB/s)(861MiB/9473msec); 0 zone resets 00:32:21.555 slat (usec): min=12, max=1782, avg=38.17, stdev=21.34 00:32:21.555 clat (usec): min=24, max=2068, avg=219.52, stdev=135.98 00:32:21.555 lat (usec): min=43, max=2096, avg=257.68, stdev=150.67 00:32:21.555 clat percentiles (usec): 00:32:21.555 | 50.000th=[ 188], 99.000th=[ 627], 99.900th=[ 668], 99.990th=[ 685], 00:32:21.555 | 99.999th=[ 758] 00:32:21.555 bw ( KiB/s): min=83480, max=95240, per=94.93%, avg=88341.47, stdev=1658.56, samples=38 00:32:21.555 iops : min=20870, max=23810, avg=22085.37, stdev=414.64, samples=38 00:32:21.555 lat (usec) : 20=0.01%, 50=0.02%, 100=19.30%, 250=59.23%, 500=17.34% 00:32:21.555 lat (usec) : 750=4.09%, 1000=0.01% 00:32:21.555 lat (msec) : 4=0.01% 00:32:21.555 cpu : usr=99.54%, sys=0.00%, ctx=24, majf=0, minf=461 00:32:21.555 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:21.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:21.555 complete :
0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:21.555 issued rwts: total=193682,220397,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:21.555 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:21.555 00:32:21.555 Run status group 0 (all jobs): 00:32:21.555 READ: bw=75.6MiB/s (79.3MB/s), 75.6MiB/s-75.6MiB/s (79.3MB/s-79.3MB/s), io=757MiB (793MB), run=10001-10001msec 00:32:21.555 WRITE: bw=90.9MiB/s (95.3MB/s), 90.9MiB/s-90.9MiB/s (95.3MB/s-95.3MB/s), io=861MiB (903MB), run=9473-9473msec 00:32:21.555 00:32:21.555 real 0m11.173s 00:32:21.555 user 0m31.156s 00:32:21.555 sys 0m0.424s 00:32:21.555 13:32:00 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:21.555 13:32:00 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:32:21.555 ************************************ 00:32:21.555 END TEST bdev_fio_rw_verify 00:32:21.555 ************************************ 00:32:21.555 13:32:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:32:21.555 13:32:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:21.555 13:32:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:32:21.555 13:32:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:21.555 13:32:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:32:21.555 13:32:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:32:21.555 13:32:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:32:21.555 13:32:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local 
fio_dir=/usr/src/fio 00:32:21.555 13:32:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:32:21.555 13:32:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:32:21.555 13:32:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:32:21.555 13:32:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:21.555 13:32:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:32:21.555 13:32:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:32:21.555 13:32:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:32:21.555 13:32:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:32:21.555 13:32:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "17866dda-d52e-50de-9e17-5c1ab02bf17b"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "17866dda-d52e-50de-9e17-5c1ab02bf17b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' 
"seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "411e44e4-9ba1-5675-b3c8-3ffc938cfd57"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "411e44e4-9ba1-5675-b3c8-3ffc938cfd57",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n crypto_ram 00:32:21.556 crypto_ram3 ]] 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "17866dda-d52e-50de-9e17-5c1ab02bf17b"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": 
"17866dda-d52e-50de-9e17-5c1ab02bf17b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "411e44e4-9ba1-5675-b3c8-3ffc938cfd57"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "411e44e4-9ba1-5675-b3c8-3ffc938cfd57",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' 
"name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram]' 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram3]' 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram3 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:32:21.556 ************************************ 00:32:21.556 START TEST bdev_fio_trim 00:32:21.556 ************************************ 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:21.556 13:32:00 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:32:21.556 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:21.556 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:21.556 fio-3.35 00:32:21.556 
Starting 2 threads 00:32:21.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.556 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:21.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.556 EAL: Requested device 0000:3d:01.1 cannot be used 00:32:21.556 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.556 EAL: Requested device 0000:3d:01.2 cannot be used 00:32:21.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3d:01.3 cannot be used 00:32:21.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3d:01.4 cannot be used 00:32:21.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3d:01.5 cannot be used 00:32:21.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3d:01.6 cannot be used 00:32:21.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3d:01.7 cannot be used 00:32:21.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3d:02.0 cannot be used 00:32:21.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3d:02.1 cannot be used 00:32:21.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3d:02.2 cannot be used 00:32:21.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3d:02.3 cannot be used 00:32:21.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3d:02.4 cannot be used 00:32:21.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3d:02.5 cannot be used 
00:32:21.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3d:02.6 cannot be used 00:32:21.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3d:02.7 cannot be used 00:32:21.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3f:01.0 cannot be used 00:32:21.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3f:01.1 cannot be used 00:32:21.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3f:01.2 cannot be used 00:32:21.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3f:01.3 cannot be used 00:32:21.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3f:01.4 cannot be used 00:32:21.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3f:01.5 cannot be used 00:32:21.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3f:01.6 cannot be used 00:32:21.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3f:01.7 cannot be used 00:32:21.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3f:02.0 cannot be used 00:32:21.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3f:02.1 cannot be used 00:32:21.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3f:02.2 cannot be used 00:32:21.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3f:02.3 cannot be used 00:32:21.557 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3f:02.4 cannot be used 00:32:21.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3f:02.5 cannot be used 00:32:21.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3f:02.6 cannot be used 00:32:21.557 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:21.557 EAL: Requested device 0000:3f:02.7 cannot be used 00:32:31.591 00:32:31.591 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=887959: Fri Jul 26 13:32:11 2024 00:32:31.591 write: IOPS=27.0k, BW=105MiB/s (111MB/s)(1054MiB/10001msec); 0 zone resets 00:32:31.591 slat (usec): min=16, max=1865, avg=32.40, stdev=10.20 00:32:31.591 clat (usec): min=98, max=2168, avg=244.33, stdev=58.45 00:32:31.591 lat (usec): min=136, max=2196, avg=276.73, stdev=55.98 00:32:31.591 clat percentiles (usec): 00:32:31.591 | 50.000th=[ 253], 99.000th=[ 338], 99.900th=[ 359], 99.990th=[ 635], 00:32:31.591 | 99.999th=[ 2114] 00:32:31.591 bw ( KiB/s): min=106512, max=108256, per=100.00%, avg=107951.58, stdev=247.72, samples=38 00:32:31.591 iops : min=26628, max=27064, avg=26987.89, stdev=61.93, samples=38 00:32:31.591 trim: IOPS=27.0k, BW=105MiB/s (111MB/s)(1054MiB/10001msec); 0 zone resets 00:32:31.591 slat (usec): min=7, max=299, avg=14.55, stdev= 5.13 00:32:31.591 clat (usec): min=41, max=1960, avg=163.07, stdev=91.77 00:32:31.591 lat (usec): min=48, max=1969, avg=177.61, stdev=94.94 00:32:31.591 clat percentiles (usec): 00:32:31.591 | 50.000th=[ 133], 99.000th=[ 371], 99.900th=[ 383], 99.990th=[ 400], 00:32:31.591 | 99.999th=[ 1074] 00:32:31.591 bw ( KiB/s): min=106536, max=108256, per=100.00%, avg=107952.84, stdev=243.94, samples=38 00:32:31.591 iops : min=26634, max=27064, avg=26988.21, stdev=60.98, samples=38 00:32:31.591 lat (usec) : 50=0.86%, 100=14.35%, 250=48.59%, 500=36.19%, 750=0.01% 
00:32:31.591 lat (usec) : 1000=0.01% 00:32:31.591 lat (msec) : 2=0.01%, 4=0.01% 00:32:31.591 cpu : usr=99.49%, sys=0.00%, ctx=25, majf=0, minf=267 00:32:31.591 IO depths : 1=5.0%, 2=13.7%, 4=65.1%, 8=16.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:31.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:31.591 complete : 0=0.0%, 4=86.0%, 8=14.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:31.591 issued rwts: total=0,269883,269884,0 short=0,0,0,0 dropped=0,0,0,0 00:32:31.591 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:31.591 00:32:31.591 Run status group 0 (all jobs): 00:32:31.591 WRITE: bw=105MiB/s (111MB/s), 105MiB/s-105MiB/s (111MB/s-111MB/s), io=1054MiB (1105MB), run=10001-10001msec 00:32:31.591 TRIM: bw=105MiB/s (111MB/s), 105MiB/s-105MiB/s (111MB/s-111MB/s), io=1054MiB (1105MB), run=10001-10001msec 00:32:31.591 00:32:31.591 real 0m11.104s 00:32:31.591 user 0m29.889s 00:32:31.591 sys 0m0.389s 00:32:31.591 13:32:11 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:31.591 13:32:11 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:32:31.591 ************************************ 00:32:31.591 END TEST bdev_fio_trim 00:32:31.591 ************************************ 00:32:31.591 13:32:11 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:32:31.591 13:32:11 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:32:31.591 13:32:11 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:32:31.591 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:32:31.591 13:32:11 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:32:31.591 00:32:31.591 real 0m22.610s 00:32:31.591 user 1m1.190s 00:32:31.591 sys 0m1.018s 00:32:31.591 13:32:11 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:32:31.591 13:32:11 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:32:31.591 ************************************ 00:32:31.591 END TEST bdev_fio 00:32:31.591 ************************************ 00:32:31.591 13:32:11 blockdev_crypto_sw -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:31.591 13:32:11 blockdev_crypto_sw -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:32:31.591 13:32:11 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:32:31.591 13:32:11 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:31.591 13:32:11 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:31.591 ************************************ 00:32:31.591 START TEST bdev_verify 00:32:31.591 ************************************ 00:32:31.591 13:32:11 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:32:31.591 [2024-07-26 13:32:11.808390] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:32:31.591 [2024-07-26 13:32:11.808447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid889585 ] 00:32:31.592 [2024-07-26 13:32:11.939487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:31.592 [2024-07-26 13:32:12.026115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:31.592 [2024-07-26 13:32:12.026121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:31.851 [2024-07-26 13:32:12.183952] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:32:31.851 [2024-07-26 13:32:12.184005] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:32:31.851 [2024-07-26 13:32:12.184019] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:31.851 [2024-07-26 13:32:12.191973] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:32:31.851 [2024-07-26 13:32:12.191994] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:31.851 [2024-07-26 13:32:12.192005] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:31.851 [2024-07-26 13:32:12.199997] vbdev_crypto_rpc.c: 
115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:32:31.851 [2024-07-26 13:32:12.200013] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:32:31.851 [2024-07-26 13:32:12.200023] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:31.851 Running I/O for 5 seconds... 00:32:37.122 00:32:37.122 Latency(us) 00:32:37.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.122 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:37.122 Verification LBA range: start 0x0 length 0x800 00:32:37.122 crypto_ram : 5.03 5627.41 21.98 0.00 0.00 22649.00 1441.79 28940.70 00:32:37.122 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:37.122 Verification LBA range: start 0x800 length 0x800 00:32:37.122 crypto_ram : 5.03 5628.41 21.99 0.00 0.00 22645.57 1690.83 28730.98 00:32:37.122 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:37.122 Verification LBA range: start 0x0 length 0x800 00:32:37.122 crypto_ram3 : 5.03 2822.50 11.03 0.00 0.00 45097.47 1690.83 33344.72 00:32:37.122 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:37.122 Verification LBA range: start 0x800 length 0x800 00:32:37.122 crypto_ram3 : 5.03 2823.03 11.03 0.00 0.00 45086.12 1913.65 33344.72 00:32:37.122 =================================================================================================================== 00:32:37.122 Total : 16901.34 66.02 0.00 0.00 30151.32 1441.79 33344.72 00:32:37.122 00:32:37.122 real 0m5.746s 00:32:37.122 user 0m10.860s 00:32:37.122 sys 0m0.213s 00:32:37.122 13:32:17 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:37.122 13:32:17 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:32:37.122 ************************************ 00:32:37.122 END 
TEST bdev_verify 00:32:37.122 ************************************ 00:32:37.122 13:32:17 blockdev_crypto_sw -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:37.122 13:32:17 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:32:37.122 13:32:17 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:37.122 13:32:17 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:37.122 ************************************ 00:32:37.122 START TEST bdev_verify_big_io 00:32:37.122 ************************************ 00:32:37.122 13:32:17 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:37.122 [2024-07-26 13:32:17.635298] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:32:37.122 [2024-07-26 13:32:17.635355] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid890641 ] 00:32:37.382 [2024-07-26 13:32:17.766204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:37.382 [2024-07-26 13:32:17.853914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:37.382 [2024-07-26 13:32:17.853919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:37.642 [2024-07-26 13:32:18.027545] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:32:37.642 [2024-07-26 13:32:18.027606] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:32:37.642 [2024-07-26 13:32:18.027619] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:37.642 [2024-07-26 13:32:18.035567] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:32:37.642 [2024-07-26 13:32:18.035584] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:37.642 [2024-07-26 13:32:18.035599] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:37.642 [2024-07-26 13:32:18.043589] vbdev_crypto_rpc.c: 
115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3"
00:32:37.642 [2024-07-26 13:32:18.043606] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2
00:32:37.642 [2024-07-26 13:32:18.043616] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:37.642 Running I/O for 5 seconds...
00:32:42.956
00:32:42.956 Latency(us)
00:32:42.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:42.956 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:32:42.956 Verification LBA range: start 0x0 length 0x80
00:32:42.956 crypto_ram : 5.07 454.11 28.38 0.00 0.00 274885.25 5269.09 372454.20
00:32:42.956 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:32:42.956 Verification LBA range: start 0x80 length 0x80
00:32:42.956 crypto_ram : 5.06 454.93 28.43 0.00 0.00 274428.09 6107.96 370776.47
00:32:42.956 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:32:42.956 Verification LBA range: start 0x0 length 0x80
00:32:42.956 crypto_ram3 : 5.25 243.79 15.24 0.00 0.00 493093.10 5269.09 385875.97
00:32:42.956 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:32:42.956 Verification LBA range: start 0x80 length 0x80
00:32:42.956 crypto_ram3 : 5.24 244.24 15.27 0.00 0.00 492219.24 5924.45 385875.97
00:32:42.956 ===================================================================================================================
00:32:42.956 Total : 1397.08 87.32 0.00 0.00 352513.63 5269.09 385875.97
00:32:43.214
00:32:43.214 real 0m5.988s
00:32:43.214 user 0m11.308s
00:32:43.214 sys 0m0.232s
00:32:43.214 13:32:23 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:43.214 13:32:23 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:32:43.214 ************************************
00:32:43.214 END TEST bdev_verify_big_io
00:32:43.214 ************************************
00:32:43.214 13:32:23 blockdev_crypto_sw -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:32:43.214 13:32:23 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:32:43.214 13:32:23 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable
00:32:43.214 13:32:23 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x
00:32:43.214 ************************************
00:32:43.214 START TEST bdev_write_zeroes
00:32:43.214 ************************************
00:32:43.214 13:32:23 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:32:43.214 [2024-07-26 13:32:23.704882] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization...
00:32:43.214 [2024-07-26 13:32:23.704937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid891654 ]
00:32:43.473 [qat_pci_device_allocate()/EAL "Requested device cannot be used" messages for devices 0000:3d:01.0 through 0000:3f:02.7 repeated as above]
00:32:43.473 [2024-07-26 13:32:23.836937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:43.473 [2024-07-26 13:32:23.921280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:32:43.733 [2024-07-26 13:32:24.080472] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw"
00:32:43.733 [2024-07-26 13:32:24.080532] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:32:43.733 [2024-07-26 13:32:24.080546] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:43.733 [2024-07-26 13:32:24.088490] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2"
00:32:43.733 [2024-07-26 13:32:24.088507] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:32:43.733 [2024-07-26 13:32:24.088518] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:43.733 [2024-07-26 13:32:24.096512] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3"
00:32:43.733 [2024-07-26 13:32:24.096532]
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2
00:32:43.733 [2024-07-26 13:32:24.096542] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:32:43.733 Running I/O for 1 seconds...
00:32:44.671
00:32:44.671 Latency(us)
00:32:44.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:44.671 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:32:44.671 crypto_ram : 1.01 28664.83 111.97 0.00 0.00 4453.97 1205.86 6186.60
00:32:44.671 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:32:44.671 crypto_ram3 : 1.01 14305.72 55.88 0.00 0.00 8881.98 5531.24 9279.90
00:32:44.671 ===================================================================================================================
00:32:44.671 Total : 42970.55 167.85 0.00 0.00 5929.98 1205.86 9279.90
00:32:44.931
00:32:44.931 real 0m1.700s
00:32:44.931 user 0m1.457s
00:32:44.931 sys 0m0.227s
00:32:44.931 13:32:25 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:44.931 13:32:25 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:32:44.931 ************************************
00:32:44.931 END TEST bdev_write_zeroes
00:32:44.931 ************************************
00:32:44.931 13:32:25 blockdev_crypto_sw -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:32:44.931 13:32:25 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:32:44.931 13:32:25 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable
00:32:44.931 13:32:25 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x
00:32:44.931
************************************ 00:32:44.931 START TEST bdev_json_nonenclosed ************************************
00:32:44.931 13:32:25 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:32:45.191 [2024-07-26 13:32:25.489226] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization...
00:32:45.191 [2024-07-26 13:32:25.489280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid891991 ]
00:32:45.191 [qat_pci_device_allocate()/EAL "Requested device cannot be used" messages for devices 0000:3d:01.0 through 0000:3f:02.7 repeated as above]
00:32:45.191 [2024-07-26 13:32:25.621429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:45.191 [2024-07-26 13:32:25.705290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:32:45.191 [2024-07-26 13:32:25.705356] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:32:45.191 [2024-07-26 13:32:25.705371] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:32:45.191 [2024-07-26 13:32:25.705382] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:32:45.451
00:32:45.451 real 0m0.361s
00:32:45.451 user 0m0.200s
00:32:45.451 sys 0m0.159s
00:32:45.451 13:32:25 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:45.451 13:32:25 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:32:45.451 ************************************
00:32:45.451 END TEST bdev_json_nonenclosed
00:32:45.451 ************************************
00:32:45.451 13:32:25 blockdev_crypto_sw -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:32:45.451 13:32:25 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:32:45.451 13:32:25 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable
00:32:45.451 13:32:25 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x
00:32:45.451 ************************************
00:32:45.451 START TEST bdev_json_nonarray
00:32:45.451 ************************************
00:32:45.451 13:32:25 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:32:45.451 [2024-07-26 13:32:25.934124] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization...
00:32:45.451 [2024-07-26 13:32:25.934185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid892015 ]
00:32:45.710 [qat_pci_device_allocate()/EAL "Requested device cannot be used" messages for devices 0000:3d:01.0 through 0000:3f:02.7 repeated as above]
00:32:45.711 [2024-07-26 13:32:26.066462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:45.711 [2024-07-26 13:32:26.149938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:32:45.711 [2024-07-26 13:32:26.150008] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:32:45.711 [2024-07-26 13:32:26.150024] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:32:45.711 [2024-07-26 13:32:26.150035] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:32:45.711
00:32:45.711 real 0m0.358s
00:32:45.711 user 0m0.210s
00:32:45.711 sys 0m0.146s
00:32:45.711 13:32:26 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable
00:32:45.711 13:32:26 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:32:45.711 ************************************
00:32:45.711 END TEST bdev_json_nonarray
00:32:45.711 ************************************
00:32:45.971 13:32:26 blockdev_crypto_sw -- bdev/blockdev.sh@786 -- # [[ crypto_sw == bdev ]]
00:32:45.971 13:32:26 blockdev_crypto_sw -- bdev/blockdev.sh@793 -- # [[ crypto_sw == gpt ]]
00:32:45.971 13:32:26 blockdev_crypto_sw -- bdev/blockdev.sh@797 -- # [[ crypto_sw == crypto_sw ]]
00:32:45.971 13:32:26 blockdev_crypto_sw -- bdev/blockdev.sh@798 -- # run_test bdev_crypto_enomem bdev_crypto_enomem
00:32:45.971 13:32:26 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:32:45.971 13:32:26 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable
00:32:45.971 13:32:26 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x
00:32:45.971 ************************************
00:32:45.971 START TEST bdev_crypto_enomem
00:32:45.971 ************************************
00:32:45.971 13:32:26 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1125 -- # bdev_crypto_enomem
00:32:45.971 13:32:26 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@634 -- # local base_dev=base0
00:32:45.971 13:32:26 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@635 -- # local test_dev=crypt0
00:32:45.971 13:32:26 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@636 -- # local err_dev=EE_base0
00:32:45.971 13:32:26 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@637 -- # local qd=32
00:32:45.971 13:32:26 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@640 -- # ERR_PID=892044
00:32:45.971 13:32:26 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@641 -- # trap 'cleanup; killprocess $ERR_PID; exit 1' SIGINT SIGTERM EXIT
00:32:45.971 13:32:26 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 32 -o 4096 -w randwrite -t 5 -f ''
00:32:45.971 13:32:26 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@642 -- # waitforlisten 892044
00:32:45.971 13:32:26 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@831 -- # '[' -z 892044 ']'
00:32:45.971 13:32:26 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:45.971 13:32:26 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@836 -- # local max_retries=100
00:32:45.971 13:32:26 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:45.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:45.971 13:32:26 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@840 -- # xtrace_disable
00:32:45.971 13:32:26 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x
00:32:45.971 [2024-07-26 13:32:26.382321] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization...
00:32:45.971 [2024-07-26 13:32:26.382376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid892044 ]
00:32:45.971 [qat_pci_device_allocate()/EAL "Requested device cannot be used" messages for devices 0000:3d:01.0 through 0000:3f:02.7 repeated as above]
00:32:46.231 [2024-07-26 13:32:26.503243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:46.231 [2024-07-26 13:32:26.590382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:32:46.800 13:32:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:32:46.800 13:32:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@864 -- # return 0
00:32:46.800 13:32:27 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@644 -- # rpc_cmd
00:32:46.800 13:32:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable
00:32:46.800 13:32:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x
00:32:46.800 true
00:32:46.800 base0
00:32:46.800 true
00:32:46.800 [2024-07-26 13:32:27.310825] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw"
00:32:46.800 crypt0
00:32:46.800 13:32:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:32:46.800 13:32:27 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@651 -- # waitforbdev crypt0
00:32:46.800 13:32:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@899 -- # local bdev_name=crypt0 00:32:46.800 13:32:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:46.800 13:32:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@901 -- # local i 00:32:46.800 13:32:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:46.800 13:32:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:46.800 13:32:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:32:46.800 13:32:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:46.800 13:32:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:32:47.061 13:32:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.061 13:32:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b crypt0 -t 2000 00:32:47.061 13:32:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:47.061 13:32:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:32:47.061 [ 00:32:47.061 { 00:32:47.061 "name": "crypt0", 00:32:47.061 "aliases": [ 00:32:47.061 "055abee1-828b-5c98-8ebb-3eb8c9d661a3" 00:32:47.061 ], 00:32:47.061 "product_name": "crypto", 00:32:47.061 "block_size": 512, 00:32:47.061 "num_blocks": 2097152, 00:32:47.061 "uuid": "055abee1-828b-5c98-8ebb-3eb8c9d661a3", 00:32:47.061 "assigned_rate_limits": { 00:32:47.061 "rw_ios_per_sec": 0, 00:32:47.061 "rw_mbytes_per_sec": 0, 00:32:47.061 "r_mbytes_per_sec": 0, 00:32:47.061 "w_mbytes_per_sec": 0 00:32:47.061 }, 00:32:47.061 "claimed": false, 00:32:47.061 "zoned": false, 00:32:47.061 "supported_io_types": { 
00:32:47.061 "read": true, 00:32:47.061 "write": true, 00:32:47.061 "unmap": false, 00:32:47.061 "flush": false, 00:32:47.061 "reset": true, 00:32:47.061 "nvme_admin": false, 00:32:47.061 "nvme_io": false, 00:32:47.061 "nvme_io_md": false, 00:32:47.061 "write_zeroes": true, 00:32:47.061 "zcopy": false, 00:32:47.061 "get_zone_info": false, 00:32:47.061 "zone_management": false, 00:32:47.061 "zone_append": false, 00:32:47.061 "compare": false, 00:32:47.061 "compare_and_write": false, 00:32:47.061 "abort": false, 00:32:47.061 "seek_hole": false, 00:32:47.061 "seek_data": false, 00:32:47.061 "copy": false, 00:32:47.061 "nvme_iov_md": false 00:32:47.061 }, 00:32:47.061 "memory_domains": [ 00:32:47.061 { 00:32:47.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:47.061 "dma_device_type": 2 00:32:47.061 } 00:32:47.061 ], 00:32:47.062 "driver_specific": { 00:32:47.062 "crypto": { 00:32:47.062 "base_bdev_name": "EE_base0", 00:32:47.062 "name": "crypt0", 00:32:47.062 "key_name": "test_dek_sw" 00:32:47.062 } 00:32:47.062 } 00:32:47.062 } 00:32:47.062 ] 00:32:47.062 13:32:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:47.062 13:32:27 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@907 -- # return 0 00:32:47.062 13:32:27 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@654 -- # rpcpid=892302 00:32:47.062 13:32:27 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@656 -- # sleep 1 00:32:47.062 13:32:27 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:32:47.062 Running I/O for 5 seconds... 
00:32:48.013 13:32:28 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@657 -- # rpc_cmd bdev_error_inject_error EE_base0 -n 5 -q 31 write nomem 00:32:48.013 13:32:28 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:48.013 13:32:28 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:32:48.013 13:32:28 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:48.013 13:32:28 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@659 -- # wait 892302 00:32:52.205 00:32:52.205 Latency(us) 00:32:52.205 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:52.205 Job: crypt0 (Core Mask 0x2, workload: randwrite, depth: 32, IO size: 4096) 00:32:52.205 crypt0 : 5.00 38974.67 152.24 0.00 0.00 817.54 383.39 1081.34 00:32:52.205 =================================================================================================================== 00:32:52.205 Total : 38974.67 152.24 0.00 0.00 817.54 383.39 1081.34 00:32:52.205 0 00:32:52.205 13:32:32 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@661 -- # rpc_cmd bdev_crypto_delete crypt0 00:32:52.205 13:32:32 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:52.205 13:32:32 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:32:52.205 13:32:32 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:52.205 13:32:32 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@663 -- # killprocess 892044 00:32:52.205 13:32:32 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@950 -- # '[' -z 892044 ']' 00:32:52.205 13:32:32 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@954 -- # kill -0 892044 00:32:52.205 13:32:32 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@955 -- # uname 00:32:52.205 13:32:32 
blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:52.205 13:32:32 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 892044 00:32:52.205 13:32:32 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:52.205 13:32:32 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:52.205 13:32:32 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 892044' 00:32:52.205 killing process with pid 892044 00:32:52.205 13:32:32 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@969 -- # kill 892044 00:32:52.205 Received shutdown signal, test time was about 5.000000 seconds 00:32:52.205 00:32:52.205 Latency(us) 00:32:52.205 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:52.205 =================================================================================================================== 00:32:52.205 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:52.205 13:32:32 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@974 -- # wait 892044 00:32:52.205 13:32:32 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@664 -- # trap - SIGINT SIGTERM EXIT 00:32:52.205 00:32:52.205 real 0m6.402s 00:32:52.205 user 0m6.637s 00:32:52.205 sys 0m0.364s 00:32:52.205 13:32:32 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:52.205 13:32:32 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:32:52.205 ************************************ 00:32:52.205 END TEST bdev_crypto_enomem 00:32:52.205 ************************************ 00:32:52.464 13:32:32 blockdev_crypto_sw -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:32:52.464 13:32:32 blockdev_crypto_sw -- bdev/blockdev.sh@810 -- # cleanup 
00:32:52.464 13:32:32 blockdev_crypto_sw -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:32:52.464 13:32:32 blockdev_crypto_sw -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:32:52.464 13:32:32 blockdev_crypto_sw -- bdev/blockdev.sh@26 -- # [[ crypto_sw == rbd ]] 00:32:52.464 13:32:32 blockdev_crypto_sw -- bdev/blockdev.sh@30 -- # [[ crypto_sw == daos ]] 00:32:52.464 13:32:32 blockdev_crypto_sw -- bdev/blockdev.sh@34 -- # [[ crypto_sw = \g\p\t ]] 00:32:52.464 13:32:32 blockdev_crypto_sw -- bdev/blockdev.sh@40 -- # [[ crypto_sw == xnvme ]] 00:32:52.464 00:32:52.464 real 0m53.886s 00:32:52.464 user 1m46.642s 00:32:52.464 sys 0m6.466s 00:32:52.464 13:32:32 blockdev_crypto_sw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:52.464 13:32:32 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:32:52.464 ************************************ 00:32:52.464 END TEST blockdev_crypto_sw 00:32:52.464 ************************************ 00:32:52.464 13:32:32 -- spdk/autotest.sh@363 -- # run_test blockdev_crypto_qat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:32:52.464 13:32:32 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:52.464 13:32:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:52.464 13:32:32 -- common/autotest_common.sh@10 -- # set +x 00:32:52.464 ************************************ 00:32:52.464 START TEST blockdev_crypto_qat 00:32:52.464 ************************************ 00:32:52.464 13:32:32 blockdev_crypto_qat -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:32:52.464 * Looking for test storage... 
00:32:52.464 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:32:52.464 13:32:32 blockdev_crypto_qat -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:32:52.464 13:32:32 blockdev_crypto_qat -- bdev/nbd_common.sh@6 -- # set -e 00:32:52.464 13:32:32 blockdev_crypto_qat -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:32:52.464 13:32:32 blockdev_crypto_qat -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:32:52.464 13:32:32 blockdev_crypto_qat -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:32:52.464 13:32:32 blockdev_crypto_qat -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:32:52.464 13:32:32 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:32:52.464 13:32:32 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:32:52.464 13:32:32 blockdev_crypto_qat -- bdev/blockdev.sh@20 -- # : 00:32:52.464 13:32:32 blockdev_crypto_qat -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:32:52.464 13:32:32 blockdev_crypto_qat -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:32:52.464 13:32:32 blockdev_crypto_qat -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:32:52.464 13:32:32 blockdev_crypto_qat -- bdev/blockdev.sh@673 -- # uname -s 00:32:52.464 13:32:32 blockdev_crypto_qat -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:32:52.464 13:32:32 blockdev_crypto_qat -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:32:52.464 13:32:32 blockdev_crypto_qat -- bdev/blockdev.sh@681 -- # test_type=crypto_qat 00:32:52.464 13:32:32 blockdev_crypto_qat -- bdev/blockdev.sh@682 -- # crypto_device= 00:32:52.464 13:32:32 blockdev_crypto_qat -- bdev/blockdev.sh@683 -- # dek= 00:32:52.464 13:32:32 blockdev_crypto_qat -- bdev/blockdev.sh@684 -- # 
env_ctx= 00:32:52.464 13:32:32 blockdev_crypto_qat -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:32:52.464 13:32:32 blockdev_crypto_qat -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:32:52.464 13:32:32 blockdev_crypto_qat -- bdev/blockdev.sh@689 -- # [[ crypto_qat == bdev ]] 00:32:52.464 13:32:32 blockdev_crypto_qat -- bdev/blockdev.sh@689 -- # [[ crypto_qat == crypto_* ]] 00:32:52.464 13:32:32 blockdev_crypto_qat -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:32:52.464 13:32:32 blockdev_crypto_qat -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:32:52.464 13:32:32 blockdev_crypto_qat -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=893174 00:32:52.464 13:32:32 blockdev_crypto_qat -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:32:52.465 13:32:32 blockdev_crypto_qat -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:32:52.465 13:32:32 blockdev_crypto_qat -- bdev/blockdev.sh@49 -- # waitforlisten 893174 00:32:52.465 13:32:32 blockdev_crypto_qat -- common/autotest_common.sh@831 -- # '[' -z 893174 ']' 00:32:52.465 13:32:32 blockdev_crypto_qat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:52.465 13:32:32 blockdev_crypto_qat -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:52.465 13:32:32 blockdev_crypto_qat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:52.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:52.465 13:32:32 blockdev_crypto_qat -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:52.465 13:32:32 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:32:52.724 [2024-07-26 13:32:33.044701] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:32:52.724 [2024-07-26 13:32:33.044765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid893174 ] 00:32:52.724 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:52.724 EAL: Requested device 0000:3d:01.0 cannot be used [... identical "qat_pci_device_allocate(): Reached maximum number of QAT devices" / "EAL: Requested device ... cannot be used" message pairs repeated for devices 0000:3d:01.1 through 0000:3f:02.7 ...] [2024-07-26 13:32:33.177377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:52.982 [2024-07-26 13:32:33.264086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:53.550 13:32:33 blockdev_crypto_qat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:53.550 13:32:33 blockdev_crypto_qat -- common/autotest_common.sh@864 -- # return 0 00:32:53.550 13:32:33 blockdev_crypto_qat -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:32:53.550 13:32:33 blockdev_crypto_qat -- bdev/blockdev.sh@707 -- # setup_crypto_qat_conf 00:32:53.550 13:32:33 blockdev_crypto_qat -- bdev/blockdev.sh@169 -- # rpc_cmd 00:32:53.550 13:32:33 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:53.550 13:32:33 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:32:53.550 [2024-07-26 13:32:33.954235] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:32:53.550 [2024-07-26 13:32:33.962268] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:32:53.550 [2024-07-26 13:32:33.970285] 
accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:32:53.550 [2024-07-26 13:32:34.039149] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:32:56.085 true 00:32:56.086 true 00:32:56.086 true 00:32:56.086 true 00:32:56.086 Malloc0 00:32:56.086 Malloc1 00:32:56.086 Malloc2 00:32:56.086 Malloc3 00:32:56.086 [2024-07-26 13:32:36.358192] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:32:56.086 crypto_ram 00:32:56.086 [2024-07-26 13:32:36.366205] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:32:56.086 crypto_ram1 00:32:56.086 [2024-07-26 13:32:36.374226] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:32:56.086 crypto_ram2 00:32:56.086 [2024-07-26 13:32:36.382245] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:32:56.086 crypto_ram3 00:32:56.086 [ 00:32:56.086 { 00:32:56.086 "name": "Malloc1", 00:32:56.086 "aliases": [ 00:32:56.086 "4d0cb2a4-9894-400b-bd15-b2e0fd1d2437" 00:32:56.086 ], 00:32:56.086 "product_name": "Malloc disk", 00:32:56.086 "block_size": 512, 00:32:56.086 "num_blocks": 65536, 00:32:56.086 "uuid": "4d0cb2a4-9894-400b-bd15-b2e0fd1d2437", 00:32:56.086 "assigned_rate_limits": { 00:32:56.086 "rw_ios_per_sec": 0, 00:32:56.086 "rw_mbytes_per_sec": 0, 00:32:56.086 "r_mbytes_per_sec": 0, 00:32:56.086 "w_mbytes_per_sec": 0 00:32:56.086 }, 00:32:56.086 "claimed": true, 00:32:56.086 "claim_type": "exclusive_write", 00:32:56.086 "zoned": false, 00:32:56.086 "supported_io_types": { 00:32:56.086 "read": true, 00:32:56.086 "write": true, 00:32:56.086 "unmap": true, 00:32:56.086 "flush": true, 00:32:56.086 "reset": true, 00:32:56.086 "nvme_admin": false, 00:32:56.086 "nvme_io": false, 00:32:56.086 "nvme_io_md": false, 00:32:56.086 "write_zeroes": true, 00:32:56.086 "zcopy": true, 00:32:56.086 
"get_zone_info": false, 00:32:56.086 "zone_management": false, 00:32:56.086 "zone_append": false, 00:32:56.086 "compare": false, 00:32:56.086 "compare_and_write": false, 00:32:56.086 "abort": true, 00:32:56.086 "seek_hole": false, 00:32:56.086 "seek_data": false, 00:32:56.086 "copy": true, 00:32:56.086 "nvme_iov_md": false 00:32:56.086 }, 00:32:56.086 "memory_domains": [ 00:32:56.086 { 00:32:56.086 "dma_device_id": "system", 00:32:56.086 "dma_device_type": 1 00:32:56.086 }, 00:32:56.086 { 00:32:56.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:56.086 "dma_device_type": 2 00:32:56.086 } 00:32:56.086 ], 00:32:56.086 "driver_specific": {} 00:32:56.086 } 00:32:56.086 ] 00:32:56.086 13:32:36 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.086 13:32:36 blockdev_crypto_qat -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:32:56.086 13:32:36 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.086 13:32:36 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:32:56.086 13:32:36 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.086 13:32:36 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # cat 00:32:56.086 13:32:36 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:32:56.086 13:32:36 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.086 13:32:36 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:32:56.086 13:32:36 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.086 13:32:36 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:32:56.086 13:32:36 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.086 13:32:36 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:32:56.086 13:32:36 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:32:56.086 13:32:36 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:32:56.086 13:32:36 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.086 13:32:36 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:32:56.086 13:32:36 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.086 13:32:36 blockdev_crypto_qat -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:32:56.086 13:32:36 blockdev_crypto_qat -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:32:56.086 13:32:36 blockdev_crypto_qat -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:32:56.086 13:32:36 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:56.086 13:32:36 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:32:56.086 13:32:36 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:56.086 13:32:36 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:32:56.086 13:32:36 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # jq -r .name 00:32:56.086 13:32:36 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "940e6004-6747-54d8-a94e-1dddc5208b65"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "940e6004-6747-54d8-a94e-1dddc5208b65",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' 
"seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "ec484ab3-6f50-5bdb-a9ae-fc9adf4ebde3"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ec484ab3-6f50-5bdb-a9ae-fc9adf4ebde3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "bab2026d-7b6f-5013-a44e-be82723d71c4"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "bab2026d-7b6f-5013-a44e-be82723d71c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": 
false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "418cd8fd-bb79-5c4f-b5ae-2a8e0ee094c6"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "418cd8fd-bb79-5c4f-b5ae-2a8e0ee094c6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:32:56.345 13:32:36 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:32:56.345 13:32:36 blockdev_crypto_qat -- bdev/blockdev.sh@751 -- # hello_world_bdev=crypto_ram 00:32:56.345 13:32:36 blockdev_crypto_qat -- bdev/blockdev.sh@752 -- # trap 
- SIGINT SIGTERM EXIT 00:32:56.345 13:32:36 blockdev_crypto_qat -- bdev/blockdev.sh@753 -- # killprocess 893174 00:32:56.345 13:32:36 blockdev_crypto_qat -- common/autotest_common.sh@950 -- # '[' -z 893174 ']' 00:32:56.345 13:32:36 blockdev_crypto_qat -- common/autotest_common.sh@954 -- # kill -0 893174 00:32:56.345 13:32:36 blockdev_crypto_qat -- common/autotest_common.sh@955 -- # uname 00:32:56.345 13:32:36 blockdev_crypto_qat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:56.345 13:32:36 blockdev_crypto_qat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 893174 00:32:56.345 13:32:36 blockdev_crypto_qat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:56.345 13:32:36 blockdev_crypto_qat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:56.345 13:32:36 blockdev_crypto_qat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 893174' 00:32:56.345 killing process with pid 893174 00:32:56.345 13:32:36 blockdev_crypto_qat -- common/autotest_common.sh@969 -- # kill 893174 00:32:56.345 13:32:36 blockdev_crypto_qat -- common/autotest_common.sh@974 -- # wait 893174 00:32:56.913 13:32:37 blockdev_crypto_qat -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:56.913 13:32:37 blockdev_crypto_qat -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:32:56.913 13:32:37 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:32:56.913 13:32:37 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:56.913 13:32:37 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:32:56.913 ************************************ 00:32:56.913 START TEST bdev_hello_world 00:32:56.913 ************************************ 00:32:56.913 13:32:37 
blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:32:56.913 [2024-07-26 13:32:37.230080] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:32:56.913 [2024-07-26 13:32:37.230133] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid893966 ] 00:32:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.913 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.913 EAL: Requested device 0000:3d:01.1 cannot be used 00:32:56.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.913 EAL: Requested device 0000:3d:01.2 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.914 EAL: Requested device 0000:3d:01.3 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.914 EAL: Requested device 0000:3d:01.4 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.914 EAL: Requested device 0000:3d:01.5 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.914 EAL: Requested device 0000:3d:01.6 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.914 EAL: Requested device 0000:3d:01.7 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.914 EAL: Requested device 0000:3d:02.0 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:32:56.914 EAL: Requested device 0000:3d:02.1 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.914 EAL: Requested device 0000:3d:02.2 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.914 EAL: Requested device 0000:3d:02.3 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.914 EAL: Requested device 0000:3d:02.4 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.914 EAL: Requested device 0000:3d:02.5 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.914 EAL: Requested device 0000:3d:02.6 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.914 EAL: Requested device 0000:3d:02.7 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.914 EAL: Requested device 0000:3f:01.0 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.914 EAL: Requested device 0000:3f:01.1 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.914 EAL: Requested device 0000:3f:01.2 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.914 EAL: Requested device 0000:3f:01.3 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.914 EAL: Requested device 0000:3f:01.4 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.914 EAL: Requested device 0000:3f:01.5 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.914 EAL: Requested device 0000:3f:01.6 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.914 EAL: 
Requested device 0000:3f:01.7 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.914 EAL: Requested device 0000:3f:02.0 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.914 EAL: Requested device 0000:3f:02.1 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.914 EAL: Requested device 0000:3f:02.2 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.914 EAL: Requested device 0000:3f:02.3 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.914 EAL: Requested device 0000:3f:02.4 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.914 EAL: Requested device 0000:3f:02.5 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.914 EAL: Requested device 0000:3f:02.6 cannot be used 00:32:56.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:56.914 EAL: Requested device 0000:3f:02.7 cannot be used 00:32:56.914 [2024-07-26 13:32:37.361575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.173 [2024-07-26 13:32:37.444421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.173 [2024-07-26 13:32:37.465653] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:32:57.173 [2024-07-26 13:32:37.473680] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:32:57.173 [2024-07-26 13:32:37.481698] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:32:57.173 [2024-07-26 13:32:37.587222] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:32:59.704 [2024-07-26 13:32:39.749963] 
vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:32:59.704 [2024-07-26 13:32:39.750022] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:32:59.704 [2024-07-26 13:32:39.750036] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:59.704 [2024-07-26 13:32:39.757982] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:32:59.704 [2024-07-26 13:32:39.757999] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:32:59.704 [2024-07-26 13:32:39.758010] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:59.704 [2024-07-26 13:32:39.766005] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:32:59.704 [2024-07-26 13:32:39.766024] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:32:59.704 [2024-07-26 13:32:39.766034] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:59.704 [2024-07-26 13:32:39.774026] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:32:59.704 [2024-07-26 13:32:39.774042] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:32:59.704 [2024-07-26 13:32:39.774052] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:32:59.704 [2024-07-26 13:32:39.845391] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:32:59.704 [2024-07-26 13:32:39.845431] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:32:59.704 [2024-07-26 13:32:39.845448] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:32:59.704 [2024-07-26 13:32:39.846621] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 
00:32:59.704 [2024-07-26 13:32:39.846684] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:32:59.704 [2024-07-26 13:32:39.846700] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:32:59.704 [2024-07-26 13:32:39.846739] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:32:59.704 00:32:59.704 [2024-07-26 13:32:39.846755] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:32:59.704 00:32:59.704 real 0m2.987s 00:32:59.704 user 0m2.624s 00:32:59.704 sys 0m0.326s 00:32:59.704 13:32:40 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:59.704 13:32:40 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:32:59.704 ************************************ 00:32:59.704 END TEST bdev_hello_world 00:32:59.704 ************************************ 00:32:59.704 13:32:40 blockdev_crypto_qat -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:32:59.704 13:32:40 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:32:59.704 13:32:40 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:59.704 13:32:40 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:32:59.963 ************************************ 00:32:59.963 START TEST bdev_bounds 00:32:59.963 ************************************ 00:32:59.963 13:32:40 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:32:59.963 13:32:40 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=894508 00:32:59.963 13:32:40 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:32:59.963 13:32:40 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:32:59.963 13:32:40 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 894508' 00:32:59.963 Process bdevio pid: 894508 00:32:59.963 13:32:40 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 894508 00:32:59.963 13:32:40 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 894508 ']' 00:32:59.963 13:32:40 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:59.963 13:32:40 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:59.963 13:32:40 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:59.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:59.963 13:32:40 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:59.963 13:32:40 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:32:59.963 [2024-07-26 13:32:40.294449] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:32:59.963 [2024-07-26 13:32:40.294504] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid894508 ] 00:32:59.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.963 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:59.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.963 EAL: Requested device 0000:3d:01.1 cannot be used 00:32:59.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.963 EAL: Requested device 0000:3d:01.2 cannot be used 00:32:59.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.963 EAL: Requested device 0000:3d:01.3 cannot be used 00:32:59.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.963 EAL: Requested device 0000:3d:01.4 cannot be used 00:32:59.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.963 EAL: Requested device 0000:3d:01.5 cannot be used 00:32:59.963 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.963 EAL: Requested device 0000:3d:01.6 cannot be used 00:32:59.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.964 EAL: Requested device 0000:3d:01.7 cannot be used 00:32:59.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.964 EAL: Requested device 0000:3d:02.0 cannot be used 00:32:59.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.964 EAL: Requested device 0000:3d:02.1 cannot be used 00:32:59.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.964 EAL: Requested device 0000:3d:02.2 cannot be used 00:32:59.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.964 EAL: Requested device 0000:3d:02.3 cannot be used 
00:32:59.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.964 EAL: Requested device 0000:3d:02.4 cannot be used 00:32:59.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.964 EAL: Requested device 0000:3d:02.5 cannot be used 00:32:59.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.964 EAL: Requested device 0000:3d:02.6 cannot be used 00:32:59.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.964 EAL: Requested device 0000:3d:02.7 cannot be used 00:32:59.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.964 EAL: Requested device 0000:3f:01.0 cannot be used 00:32:59.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.964 EAL: Requested device 0000:3f:01.1 cannot be used 00:32:59.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.964 EAL: Requested device 0000:3f:01.2 cannot be used 00:32:59.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.964 EAL: Requested device 0000:3f:01.3 cannot be used 00:32:59.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.964 EAL: Requested device 0000:3f:01.4 cannot be used 00:32:59.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.964 EAL: Requested device 0000:3f:01.5 cannot be used 00:32:59.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.964 EAL: Requested device 0000:3f:01.6 cannot be used 00:32:59.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.964 EAL: Requested device 0000:3f:01.7 cannot be used 00:32:59.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.964 EAL: Requested device 0000:3f:02.0 cannot be used 00:32:59.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.964 EAL: Requested device 0000:3f:02.1 cannot be used 00:32:59.964 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.964 EAL: Requested device 0000:3f:02.2 cannot be used 00:32:59.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.964 EAL: Requested device 0000:3f:02.3 cannot be used 00:32:59.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.964 EAL: Requested device 0000:3f:02.4 cannot be used 00:32:59.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.964 EAL: Requested device 0000:3f:02.5 cannot be used 00:32:59.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.964 EAL: Requested device 0000:3f:02.6 cannot be used 00:32:59.964 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:59.964 EAL: Requested device 0000:3f:02.7 cannot be used 00:32:59.964 [2024-07-26 13:32:40.427336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:00.223 [2024-07-26 13:32:40.518049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:00.223 [2024-07-26 13:32:40.518149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.223 [2024-07-26 13:32:40.518150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:00.223 [2024-07-26 13:32:40.539449] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:33:00.223 [2024-07-26 13:32:40.547459] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:33:00.223 [2024-07-26 13:32:40.555487] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:33:00.223 [2024-07-26 13:32:40.665661] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:33:02.757 [2024-07-26 13:32:42.818897] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:33:02.757 [2024-07-26 13:32:42.818977] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:33:02.757 [2024-07-26 13:32:42.818990] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:02.757 [2024-07-26 13:32:42.826917] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:33:02.757 [2024-07-26 13:32:42.826936] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:33:02.757 [2024-07-26 13:32:42.826947] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:02.757 [2024-07-26 13:32:42.834939] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:33:02.757 [2024-07-26 13:32:42.834955] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:33:02.757 [2024-07-26 13:32:42.834966] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:02.757 [2024-07-26 13:32:42.842961] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:33:02.757 [2024-07-26 13:32:42.842978] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:33:02.757 [2024-07-26 13:32:42.842988] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:02.757 13:32:42 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:02.757 13:32:42 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:33:02.757 13:32:42 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:33:02.757 I/O targets: 00:33:02.757 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:33:02.757 crypto_ram1: 65536 blocks of 512 bytes (32 MiB) 00:33:02.757 crypto_ram2: 8192 blocks of 4096 
bytes (32 MiB) 00:33:02.757 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:33:02.757 00:33:02.757 00:33:02.757 CUnit - A unit testing framework for C - Version 2.1-3 00:33:02.757 http://cunit.sourceforge.net/ 00:33:02.757 00:33:02.757 00:33:02.758 Suite: bdevio tests on: crypto_ram3 00:33:02.758 Test: blockdev write read block ...passed 00:33:02.758 Test: blockdev write zeroes read block ...passed 00:33:02.758 Test: blockdev write zeroes read no split ...passed 00:33:02.758 Test: blockdev write zeroes read split ...passed 00:33:02.758 Test: blockdev write zeroes read split partial ...passed 00:33:02.758 Test: blockdev reset ...passed 00:33:02.758 Test: blockdev write read 8 blocks ...passed 00:33:02.758 Test: blockdev write read size > 128k ...passed 00:33:02.758 Test: blockdev write read invalid size ...passed 00:33:02.758 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:02.758 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:02.758 Test: blockdev write read max offset ...passed 00:33:02.758 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:02.758 Test: blockdev writev readv 8 blocks ...passed 00:33:02.758 Test: blockdev writev readv 30 x 1block ...passed 00:33:02.758 Test: blockdev writev readv block ...passed 00:33:02.758 Test: blockdev writev readv size > 128k ...passed 00:33:02.758 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:02.758 Test: blockdev comparev and writev ...passed 00:33:02.758 Test: blockdev nvme passthru rw ...passed 00:33:02.758 Test: blockdev nvme passthru vendor specific ...passed 00:33:02.758 Test: blockdev nvme admin passthru ...passed 00:33:02.758 Test: blockdev copy ...passed 00:33:02.758 Suite: bdevio tests on: crypto_ram2 00:33:02.758 Test: blockdev write read block ...passed 00:33:02.758 Test: blockdev write zeroes read block ...passed 00:33:02.758 Test: blockdev write zeroes read no split ...passed 00:33:02.758 Test: 
blockdev write zeroes read split ...passed 00:33:02.758 Test: blockdev write zeroes read split partial ...passed 00:33:02.758 Test: blockdev reset ...passed 00:33:02.758 Test: blockdev write read 8 blocks ...passed 00:33:02.758 Test: blockdev write read size > 128k ...passed 00:33:02.758 Test: blockdev write read invalid size ...passed 00:33:02.758 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:02.758 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:02.758 Test: blockdev write read max offset ...passed 00:33:02.758 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:02.758 Test: blockdev writev readv 8 blocks ...passed 00:33:02.758 Test: blockdev writev readv 30 x 1block ...passed 00:33:02.758 Test: blockdev writev readv block ...passed 00:33:02.758 Test: blockdev writev readv size > 128k ...passed 00:33:02.758 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:02.758 Test: blockdev comparev and writev ...passed 00:33:02.758 Test: blockdev nvme passthru rw ...passed 00:33:02.758 Test: blockdev nvme passthru vendor specific ...passed 00:33:02.758 Test: blockdev nvme admin passthru ...passed 00:33:02.758 Test: blockdev copy ...passed 00:33:02.758 Suite: bdevio tests on: crypto_ram1 00:33:02.758 Test: blockdev write read block ...passed 00:33:02.758 Test: blockdev write zeroes read block ...passed 00:33:02.758 Test: blockdev write zeroes read no split ...passed 00:33:02.758 Test: blockdev write zeroes read split ...passed 00:33:02.758 Test: blockdev write zeroes read split partial ...passed 00:33:02.758 Test: blockdev reset ...passed 00:33:02.758 Test: blockdev write read 8 blocks ...passed 00:33:02.758 Test: blockdev write read size > 128k ...passed 00:33:02.758 Test: blockdev write read invalid size ...passed 00:33:02.758 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:02.758 Test: blockdev write read offset + nbytes > size of 
blockdev ...passed 00:33:02.758 Test: blockdev write read max offset ...passed 00:33:02.758 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:02.758 Test: blockdev writev readv 8 blocks ...passed 00:33:02.758 Test: blockdev writev readv 30 x 1block ...passed 00:33:02.758 Test: blockdev writev readv block ...passed 00:33:02.758 Test: blockdev writev readv size > 128k ...passed 00:33:02.758 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:02.758 Test: blockdev comparev and writev ...passed 00:33:02.758 Test: blockdev nvme passthru rw ...passed 00:33:02.758 Test: blockdev nvme passthru vendor specific ...passed 00:33:02.758 Test: blockdev nvme admin passthru ...passed 00:33:02.758 Test: blockdev copy ...passed 00:33:02.758 Suite: bdevio tests on: crypto_ram 00:33:02.758 Test: blockdev write read block ...passed 00:33:02.758 Test: blockdev write zeroes read block ...passed 00:33:02.758 Test: blockdev write zeroes read no split ...passed 00:33:02.758 Test: blockdev write zeroes read split ...passed 00:33:03.016 Test: blockdev write zeroes read split partial ...passed 00:33:03.016 Test: blockdev reset ...passed 00:33:03.016 Test: blockdev write read 8 blocks ...passed 00:33:03.016 Test: blockdev write read size > 128k ...passed 00:33:03.016 Test: blockdev write read invalid size ...passed 00:33:03.016 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:03.016 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:03.016 Test: blockdev write read max offset ...passed 00:33:03.016 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:03.016 Test: blockdev writev readv 8 blocks ...passed 00:33:03.016 Test: blockdev writev readv 30 x 1block ...passed 00:33:03.016 Test: blockdev writev readv block ...passed 00:33:03.016 Test: blockdev writev readv size > 128k ...passed 00:33:03.016 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:03.016 
Test: blockdev comparev and writev ...passed 00:33:03.016 Test: blockdev nvme passthru rw ...passed 00:33:03.016 Test: blockdev nvme passthru vendor specific ...passed 00:33:03.016 Test: blockdev nvme admin passthru ...passed 00:33:03.016 Test: blockdev copy ...passed 00:33:03.016 00:33:03.016 Run Summary: Type Total Ran Passed Failed Inactive 00:33:03.016 suites 4 4 n/a 0 0 00:33:03.016 tests 92 92 92 0 0 00:33:03.016 asserts 520 520 520 0 n/a 00:33:03.016 00:33:03.016 Elapsed time = 0.497 seconds 00:33:03.016 0 00:33:03.016 13:32:43 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 894508 00:33:03.016 13:32:43 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 894508 ']' 00:33:03.016 13:32:43 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 894508 00:33:03.016 13:32:43 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:33:03.016 13:32:43 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:03.016 13:32:43 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 894508 00:33:03.016 13:32:43 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:03.016 13:32:43 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:03.016 13:32:43 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 894508' 00:33:03.016 killing process with pid 894508 00:33:03.016 13:32:43 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@969 -- # kill 894508 00:33:03.016 13:32:43 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@974 -- # wait 894508 00:33:03.275 13:32:43 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:33:03.275 00:33:03.275 real 0m3.463s 00:33:03.275 user 0m9.690s 00:33:03.275 sys 0m0.531s 
00:33:03.275 13:32:43 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:03.275 13:32:43 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:33:03.275 ************************************ 00:33:03.275 END TEST bdev_bounds 00:33:03.275 ************************************ 00:33:03.275 13:32:43 blockdev_crypto_qat -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:33:03.275 13:32:43 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:33:03.275 13:32:43 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:03.275 13:32:43 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:33:03.275 ************************************ 00:33:03.275 START TEST bdev_nbd 00:33:03.275 ************************************ 00:33:03.275 13:32:43 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:33:03.275 13:32:43 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:33:03.275 13:32:43 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:33:03.275 13:32:43 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:03.275 13:32:43 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:33:03.275 13:32:43 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:33:03.275 13:32:43 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:33:03.275 13:32:43 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@304 -- # 
local bdev_num=4 00:33:03.275 13:32:43 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:33:03.275 13:32:43 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:33:03.275 13:32:43 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:33:03.275 13:32:43 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=4 00:33:03.275 13:32:43 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:33:03.275 13:32:43 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:33:03.275 13:32:43 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:33:03.275 13:32:43 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:33:03.275 13:32:43 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=895073 00:33:03.276 13:32:43 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:33:03.276 13:32:43 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:33:03.276 13:32:43 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 895073 /var/tmp/spdk-nbd.sock 00:33:03.276 13:32:43 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 895073 ']' 00:33:03.276 13:32:43 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:33:03.276 13:32:43 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:33:03.276 13:32:43 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:33:03.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:33:03.276 13:32:43 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:03.536 13:32:43 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:33:03.536 [2024-07-26 13:32:43.854734] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:33:03.536 [2024-07-26 13:32:43.854789] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:33:03.536 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3d:02.3 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 
EAL: Requested device 0000:3f:01.5 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:03.536 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:03.536 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:03.536 [2024-07-26 13:32:43.987494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:03.795 [2024-07-26 13:32:44.073776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:03.795 [2024-07-26 13:32:44.095011] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:33:03.795 [2024-07-26 13:32:44.103034] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:33:03.795 [2024-07-26 13:32:44.111052] 
accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:33:03.795 [2024-07-26 13:32:44.212688] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:33:06.330 [2024-07-26 13:32:46.376104] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:33:06.330 [2024-07-26 13:32:46.376173] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:33:06.330 [2024-07-26 13:32:46.376187] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:06.330 [2024-07-26 13:32:46.384123] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:33:06.330 [2024-07-26 13:32:46.384149] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:33:06.330 [2024-07-26 13:32:46.384160] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:06.330 [2024-07-26 13:32:46.392150] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:33:06.330 [2024-07-26 13:32:46.392166] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:33:06.330 [2024-07-26 13:32:46.392176] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:06.330 [2024-07-26 13:32:46.400172] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:33:06.330 [2024-07-26 13:32:46.400188] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:33:06.330 [2024-07-26 13:32:46.400198] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:33:06.330 13:32:46 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:06.330 13:32:46 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@864 -- # return 0 00:33:06.330 13:32:46 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:33:06.330 13:32:46 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:06.330 13:32:46 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:33:06.330 13:32:46 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:33:06.330 13:32:46 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:33:06.330 13:32:46 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:06.330 13:32:46 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:33:06.330 13:32:46 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:33:06.330 13:32:46 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:33:06.330 13:32:46 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:33:06.330 13:32:46 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:33:06.330 13:32:46 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:33:06.330 13:32:46 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:33:06.330 13:32:46 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:33:06.330 13:32:46 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:33:06.330 13:32:46 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd0 00:33:06.330 13:32:46 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:33:06.330 13:32:46 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:33:06.330 13:32:46 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:06.330 13:32:46 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:06.330 13:32:46 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:33:06.330 13:32:46 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:33:06.330 13:32:46 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:06.330 13:32:46 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:06.330 13:32:46 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:06.330 1+0 records in 00:33:06.330 1+0 records out 00:33:06.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286269 s, 14.3 MB/s 00:33:06.331 13:32:46 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:06.331 13:32:46 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:33:06.331 13:32:46 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:06.331 13:32:46 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:06.331 13:32:46 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:33:06.331 13:32:46 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:33:06.331 13:32:46 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 
00:33:06.331 13:32:46 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 00:33:06.631 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:33:06.631 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:33:06.631 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:33:06.631 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:33:06.631 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:33:06.631 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:06.631 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:06.631 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:33:06.631 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:33:06.631 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:06.631 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:06.631 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:06.631 1+0 records in 00:33:06.631 1+0 records out 00:33:06.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280317 s, 14.6 MB/s 00:33:06.631 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:06.631 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:33:06.631 13:32:47 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:06.631 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:06.631 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:33:06.631 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:33:06.631 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:33:06.631 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:33:06.890 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:33:06.890 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:33:06.890 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:33:06.890 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:33:06.890 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:33:06.890 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:06.890 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:06.890 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:33:06.890 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:33:06.890 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:06.890 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:06.890 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:33:06.890 1+0 records in 00:33:06.890 1+0 records out 00:33:06.890 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274755 s, 14.9 MB/s 00:33:06.890 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:06.890 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:33:06.890 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:06.890 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:06.890 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:33:06.890 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:33:06.890 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:33:06.890 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:33:07.149 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:33:07.149 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:33:07.149 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:33:07.149 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:33:07.149 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:33:07.149 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:07.149 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:07.149 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 
/proc/partitions 00:33:07.149 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:33:07.149 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:07.149 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:07.149 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:07.149 1+0 records in 00:33:07.149 1+0 records out 00:33:07.149 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343067 s, 11.9 MB/s 00:33:07.149 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:07.149 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:33:07.149 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:07.149 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:07.149 13:32:47 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:33:07.149 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:33:07.149 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:33:07.149 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:07.408 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:33:07.408 { 00:33:07.409 "nbd_device": "/dev/nbd0", 00:33:07.409 "bdev_name": "crypto_ram" 00:33:07.409 }, 00:33:07.409 { 00:33:07.409 "nbd_device": "/dev/nbd1", 00:33:07.409 "bdev_name": "crypto_ram1" 00:33:07.409 }, 00:33:07.409 { 
00:33:07.409 "nbd_device": "/dev/nbd2", 00:33:07.409 "bdev_name": "crypto_ram2" 00:33:07.409 }, 00:33:07.409 { 00:33:07.409 "nbd_device": "/dev/nbd3", 00:33:07.409 "bdev_name": "crypto_ram3" 00:33:07.409 } 00:33:07.409 ]' 00:33:07.409 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:33:07.409 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:33:07.409 { 00:33:07.409 "nbd_device": "/dev/nbd0", 00:33:07.409 "bdev_name": "crypto_ram" 00:33:07.409 }, 00:33:07.409 { 00:33:07.409 "nbd_device": "/dev/nbd1", 00:33:07.409 "bdev_name": "crypto_ram1" 00:33:07.409 }, 00:33:07.409 { 00:33:07.409 "nbd_device": "/dev/nbd2", 00:33:07.409 "bdev_name": "crypto_ram2" 00:33:07.409 }, 00:33:07.409 { 00:33:07.409 "nbd_device": "/dev/nbd3", 00:33:07.409 "bdev_name": "crypto_ram3" 00:33:07.409 } 00:33:07.409 ]' 00:33:07.409 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:33:07.409 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:33:07.409 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:07.409 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:33:07.409 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:07.409 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:33:07.409 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:07.409 13:32:47 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:07.668 13:32:48 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:07.668 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:07.668 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:07.668 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:07.668 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:07.668 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:07.668 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:07.668 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:07.668 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:07.668 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:33:07.927 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:07.927 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:07.927 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:07.927 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:07.927 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:07.927 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:07.927 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:07.927 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:07.927 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:07.927 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:33:08.186 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:33:08.186 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:33:08.186 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:33:08.186 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:08.186 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:08.186 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:33:08.186 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:08.186 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:08.186 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:08.186 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:33:08.443 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:33:08.443 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:33:08.443 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:33:08.443 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:08.443 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:08.443 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:33:08.443 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:08.443 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:08.443 13:32:48 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:08.443 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:08.443 13:32:48 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:08.702 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:08.702 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:08.702 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:08.702 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:08.702 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:33:08.702 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:08.702 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:33:08.702 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:33:08.702 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:33:08.702 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:33:08.702 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:33:08.702 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:33:08.702 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:33:08.702 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:08.702 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 
'crypto_ram3') 00:33:08.702 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:33:08.702 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:33:08.702 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:33:08.702 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:33:08.702 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:08.702 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:33:08.702 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:08.702 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:33:08.702 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:08.702 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:33:08.702 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:08.702 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:33:08.702 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:33:08.960 /dev/nbd0 00:33:08.960 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:08.960 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:08.960 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:33:08.960 13:32:49 
blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:33:08.960 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:08.960 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:08.960 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:33:08.960 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:33:08.960 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:08.960 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:08.960 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:08.960 1+0 records in 00:33:08.960 1+0 records out 00:33:08.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322043 s, 12.7 MB/s 00:33:08.960 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:08.960 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:33:08.960 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:08.960 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:08.960 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:33:08.960 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:08.960 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:33:08.960 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 /dev/nbd1 00:33:09.218 /dev/nbd1 00:33:09.218 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:09.218 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:09.218 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:33:09.218 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:33:09.218 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:09.218 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:09.218 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:33:09.218 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:33:09.218 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:09.218 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:09.218 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:09.218 1+0 records in 00:33:09.218 1+0 records out 00:33:09.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023442 s, 17.5 MB/s 00:33:09.218 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:09.218 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:33:09.218 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:09.218 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:09.218 
13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:33:09.218 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:09.218 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:33:09.218 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd10 00:33:09.477 /dev/nbd10 00:33:09.477 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:33:09.477 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:33:09.477 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:33:09.477 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:33:09.477 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:09.477 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:09.477 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:33:09.477 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:33:09.477 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:09.477 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:09.477 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:09.477 1+0 records in 00:33:09.477 1+0 records out 00:33:09.477 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326842 s, 12.5 MB/s 00:33:09.477 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:09.477 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:33:09.477 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:09.477 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:09.477 13:32:49 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:33:09.477 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:09.477 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:33:09.477 13:32:49 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd11 00:33:09.737 /dev/nbd11 00:33:09.737 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:33:09.737 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:33:09.737 13:32:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:33:09.737 13:32:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:33:09.737 13:32:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:33:09.737 13:32:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:33:09.737 13:32:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:33:09.737 13:32:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:33:09.737 13:32:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:33:09.737 13:32:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:33:09.737 
13:32:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:09.737 1+0 records in 00:33:09.737 1+0 records out 00:33:09.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324724 s, 12.6 MB/s 00:33:09.737 13:32:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:09.737 13:32:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:33:09.737 13:32:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:33:09.737 13:32:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:33:09.737 13:32:50 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:33:09.737 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:09.737 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:33:09.737 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:09.737 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:09.737 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:33:09.996 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:33:09.996 { 00:33:09.996 "nbd_device": "/dev/nbd0", 00:33:09.996 "bdev_name": "crypto_ram" 00:33:09.996 }, 00:33:09.996 { 00:33:09.996 "nbd_device": "/dev/nbd1", 00:33:09.996 "bdev_name": "crypto_ram1" 00:33:09.996 }, 00:33:09.996 { 00:33:09.996 "nbd_device": "/dev/nbd10", 00:33:09.996 "bdev_name": "crypto_ram2" 00:33:09.996 }, 
00:33:09.996 { 00:33:09.996 "nbd_device": "/dev/nbd11", 00:33:09.996 "bdev_name": "crypto_ram3" 00:33:09.996 } 00:33:09.996 ]' 00:33:09.996 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:33:09.996 { 00:33:09.996 "nbd_device": "/dev/nbd0", 00:33:09.996 "bdev_name": "crypto_ram" 00:33:09.996 }, 00:33:09.996 { 00:33:09.996 "nbd_device": "/dev/nbd1", 00:33:09.996 "bdev_name": "crypto_ram1" 00:33:09.996 }, 00:33:09.996 { 00:33:09.996 "nbd_device": "/dev/nbd10", 00:33:09.996 "bdev_name": "crypto_ram2" 00:33:09.996 }, 00:33:09.996 { 00:33:09.996 "nbd_device": "/dev/nbd11", 00:33:09.996 "bdev_name": "crypto_ram3" 00:33:09.996 } 00:33:09.996 ]' 00:33:09.996 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:09.996 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:33:09.996 /dev/nbd1 00:33:09.996 /dev/nbd10 00:33:09.996 /dev/nbd11' 00:33:09.996 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:33:09.996 /dev/nbd1 00:33:09.996 /dev/nbd10 00:33:09.996 /dev/nbd11' 00:33:09.996 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:09.996 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:33:09.996 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:33:09.996 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:33:09.996 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:33:09.996 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:33:09.996 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:33:09.996 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:09.996 
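The `nbd_dd_data_verify` phase that follows boils down to: fill a temp file with 1 MiB of random data, `dd` it onto each nbd device, then `cmp` each device back against the source. A minimal sketch against plain files (the real helper targets `/dev/nbd*` with `oflag=direct`, which is omitted here so the sketch runs on ordinary filesystems):

```shell
# Sketch of the nbd_dd_data_verify write+verify flow from the trace.
# Targets are plain files here; the real test writes to /dev/nbd*.
nbd_dd_data_verify_sketch() {
  local tmp_file=$1; shift
  local targets=("$@")
  # 256 x 4 KiB = 1 MiB of random reference data
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
  for t in "${targets[@]}"; do
    dd if="$tmp_file" of="$t" bs=4096 count=256 status=none
  done
  for t in "${targets[@]}"; do
    cmp -b -n 1M "$tmp_file" "$t" || return 1
  done
}
```

The traced run applies this across `/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11` and removes the temp file afterwards.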
13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:33:09.996 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:33:09.996 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:33:09.996 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:33:09.996 256+0 records in 00:33:09.996 256+0 records out 00:33:09.996 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101489 s, 103 MB/s 00:33:09.996 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:09.996 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:33:10.256 256+0 records in 00:33:10.256 256+0 records out 00:33:10.256 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0577574 s, 18.2 MB/s 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:33:10.256 256+0 records in 00:33:10.256 256+0 records out 00:33:10.256 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.040292 s, 26.0 MB/s 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:33:10.256 256+0 records in 00:33:10.256 256+0 records out 00:33:10.256 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0336591 s, 31.2 MB/s 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:33:10.256 256+0 records in 00:33:10.256 256+0 records out 00:33:10.256 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0496597 s, 21.1 MB/s 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:10.256 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:10.515 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:10.515 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:10.515 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:10.515 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:10.515 13:32:50 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:10.515 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:10.515 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:10.515 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:10.515 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:10.515 13:32:50 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:33:10.774 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:10.774 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:10.774 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:10.774 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:10.774 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:10.774 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:10.774 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:10.774 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:10.774 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:10.774 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:33:11.032 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:33:11.032 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:33:11.032 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- 
# local nbd_name=nbd10 00:33:11.032 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:11.032 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:11.032 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:33:11.032 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:11.032 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:11.032 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:11.032 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:33:11.291 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:33:11.291 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:33:11.291 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:33:11.291 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:11.291 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:11.291 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:33:11.291 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:11.291 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:11.291 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:33:11.291 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:11.291 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_get_disks 00:33:11.550 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:33:11.550 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:33:11.550 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:33:11.550 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:33:11.550 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:33:11.550 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:33:11.550 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:33:11.550 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:33:11.550 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:33:11.550 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:33:11.550 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:33:11.550 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:33:11.550 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:33:11.550 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:11.550 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:33:11.550 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:33:11.550 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:33:11.550 13:32:51 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b 
malloc_lvol_verify 16 512 00:33:11.809 malloc_lvol_verify 00:33:11.809 13:32:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:33:11.809 936117e6-b2ed-456f-82bc-c26038f01f2b 00:33:11.809 13:32:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:33:12.067 daef854e-17c5-4660-89ab-44a5c4c77518 00:33:12.067 13:32:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:33:12.326 /dev/nbd0 00:33:12.326 13:32:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:33:12.326 mke2fs 1.46.5 (30-Dec-2021) 00:33:12.326 Discarding device blocks: 0/4096 done 00:33:12.326 Creating filesystem with 4096 1k blocks and 1024 inodes 00:33:12.326 00:33:12.326 Allocating group tables: 0/1 done 00:33:12.326 Writing inode tables: 0/1 done 00:33:12.326 Creating journal (1024 blocks): done 00:33:12.326 Writing superblocks and filesystem accounting information: 0/1 done 00:33:12.326 00:33:12.326 13:32:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:33:12.326 13:32:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:33:12.326 13:32:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:33:12.326 13:32:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:12.326 13:32:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:12.326 13:32:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:33:12.326 13:32:52 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:12.326 13:32:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:33:12.585 13:32:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:12.585 13:32:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:12.585 13:32:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:12.585 13:32:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:12.585 13:32:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:12.585 13:32:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:12.585 13:32:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:33:12.585 13:32:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:33:12.585 13:32:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:33:12.585 13:32:52 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:33:12.585 13:32:52 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 895073 00:33:12.585 13:32:52 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 895073 ']' 00:33:12.585 13:32:52 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 895073 00:33:12.585 13:32:52 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:33:12.585 13:32:52 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:12.585 13:32:52 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 895073 00:33:12.585 13:32:52 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:33:12.585 13:32:52 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:12.585 13:32:52 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 895073' 00:33:12.585 killing process with pid 895073 00:33:12.585 13:32:52 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@969 -- # kill 895073 00:33:12.585 13:32:52 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@974 -- # wait 895073 00:33:12.851 13:32:53 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:33:12.851 00:33:12.851 real 0m9.528s 00:33:12.851 user 0m12.354s 00:33:12.851 sys 0m3.811s 00:33:12.851 13:32:53 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:12.851 13:32:53 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:33:12.851 ************************************ 00:33:12.851 END TEST bdev_nbd 00:33:12.851 ************************************ 00:33:12.851 13:32:53 blockdev_crypto_qat -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:33:12.851 13:32:53 blockdev_crypto_qat -- bdev/blockdev.sh@763 -- # '[' crypto_qat = nvme ']' 00:33:12.851 13:32:53 blockdev_crypto_qat -- bdev/blockdev.sh@763 -- # '[' crypto_qat = gpt ']' 00:33:12.851 13:32:53 blockdev_crypto_qat -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:33:12.851 13:32:53 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:12.851 13:32:53 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:12.851 13:32:53 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:33:13.110 ************************************ 00:33:13.110 START TEST bdev_fio 00:33:13.110 ************************************ 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@330 
-- # local env_context 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:33:13.110 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1299 -- # touch 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram]' 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram1]' 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram1 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram2]' 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram2 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram3]' 
00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram3 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:33:13.110 ************************************ 00:33:13.110 START TEST bdev_fio_rw_verify 00:33:13.110 ************************************ 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:33:13.110 13:32:53 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:33:13.111 13:32:53 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:13.111 13:32:53 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:13.111 13:32:53 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:33:13.111 13:32:53 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:33:13.111 13:32:53 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:13.111 13:32:53 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:13.111 13:32:53 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:13.111 13:32:53 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:13.111 13:32:53 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:33:13.111 13:32:53 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:13.111 13:32:53 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:13.111 13:32:53 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:13.111 13:32:53 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:13.111 13:32:53 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:13.111 13:32:53 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:13.696 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:13.696 job_crypto_ram1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:13.696 job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:13.696 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:13.696 fio-3.35 00:33:13.696 Starting 4 threads 00:33:13.696 
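The `bdev.fio` job file assembled earlier in the trace (blockdev.sh@340-342) gets one `[job_<bdev>]` section header plus a `filename=` line per bdev, which is why fio reports four jobs above. A minimal sketch of that generation step (function name and output handling are illustrative):

```shell
# Sketch of the per-bdev fio job-section generation from the trace:
# append one [job_<name>] header and a filename= line for each bdev.
gen_fio_jobs() {
  local out=$1; shift
  local b
  for b in "$@"; do
    echo "[job_${b}]" >> "$out"
    echo "filename=${b}" >> "$out"
  done
}
```

Called as `gen_fio_jobs bdev.fio crypto_ram crypto_ram1 crypto_ram2 crypto_ram3`, this yields the four `job_crypto_ram*` sections fio lists when it starts.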
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3d:02.3 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:13.696 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:13.696 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:13.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:13.696 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:28.574 00:33:28.574 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=897508: Fri Jul 26 13:33:06 2024 00:33:28.574 read: IOPS=21.3k, BW=83.1MiB/s (87.2MB/s)(831MiB/10001msec) 00:33:28.574 slat (usec): min=16, max=1286, avg=64.64, stdev=34.56 00:33:28.574 clat (usec): min=20, max=2076, avg=362.94, stdev=222.45 00:33:28.574 lat (usec): min=62, max=2208, avg=427.58, stdev=239.16 00:33:28.574 clat percentiles (usec): 00:33:28.574 | 50.000th=[ 302], 99.000th=[ 1037], 99.900th=[ 1270], 99.990th=[ 1500], 00:33:28.574 | 99.999th=[ 1844] 00:33:28.574 write: IOPS=23.3k, BW=91.0MiB/s (95.4MB/s)(888MiB/9757msec); 0 zone resets 00:33:28.574 slat (usec): min=21, max=417, avg=76.49, stdev=33.71 00:33:28.574 clat (usec): min=21, max=1504, avg=403.56, stdev=233.49 00:33:28.574 lat (usec): min=48, max=1670, avg=480.05, stdev=249.25 00:33:28.574 clat percentiles (usec): 00:33:28.574 | 50.000th=[ 355], 99.000th=[ 1106], 99.900th=[ 1303], 99.990th=[ 1434], 00:33:28.574 | 99.999th=[ 1483] 00:33:28.574 bw ( KiB/s): min=72656, max=138872, per=98.43%, avg=91704.84, stdev=3760.95, samples=76 00:33:28.574 iops : min=18164, max=34718, avg=22926.11, stdev=940.26, samples=76 00:33:28.574 lat (usec) : 50=0.01%, 100=3.00%, 250=31.40%, 500=39.61%, 750=17.79% 00:33:28.574 lat (usec) : 1000=6.29% 00:33:28.574 lat (msec) : 2=1.89%, 4=0.01% 00:33:28.574 cpu : usr=99.62%, sys=0.00%, ctx=108, majf=0, minf=293 00:33:28.574 IO depths : 1=4.5%, 2=27.3%, 
4=54.6%, 8=13.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:28.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:28.574 complete : 0=0.0%, 4=88.0%, 8=12.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:28.574 issued rwts: total=212802,227251,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:28.574 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:28.574 00:33:28.574 Run status group 0 (all jobs): 00:33:28.574 READ: bw=83.1MiB/s (87.2MB/s), 83.1MiB/s-83.1MiB/s (87.2MB/s-87.2MB/s), io=831MiB (872MB), run=10001-10001msec 00:33:28.574 WRITE: bw=91.0MiB/s (95.4MB/s), 91.0MiB/s-91.0MiB/s (95.4MB/s-95.4MB/s), io=888MiB (931MB), run=9757-9757msec 00:33:28.574 00:33:28.574 real 0m13.478s 00:33:28.574 user 0m52.666s 00:33:28.574 sys 0m0.507s 00:33:28.574 13:33:06 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:28.574 13:33:06 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:33:28.574 ************************************ 00:33:28.574 END TEST bdev_fio_rw_verify 00:33:28.574 ************************************ 00:33:28.574 13:33:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:33:28.574 13:33:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:28.574 13:33:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:33:28.574 13:33:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:28.574 13:33:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:33:28.574 13:33:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:33:28.574 13:33:07 
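The bdev_fio_rw_verify trace above shows autotest_common.sh's launch pattern: it probes the SPDK fio plugin for a linked ASan runtime with `ldd`, then preloads both the sanitizer library and the plugin so fio can resolve the `spdk_bdev` ioengine. A minimal standalone sketch of that pattern follows; `FIO_BIN`, `PLUGIN`, and `bdev.json` are illustrative placeholders, not the harness's exact values.

```shell
# Sketch of the fio_bdev pattern seen in the trace: probe the plugin for a
# linked ASan runtime, then preload both it and the plugin so fio can
# resolve the spdk_bdev ioengine. Paths here are illustrative placeholders.
FIO_BIN=${FIO_BIN:-/usr/src/fio/fio}
PLUGIN=${PLUGIN:-/path/to/spdk/build/fio/spdk_bdev}

# ldd column 3 is the resolved library path; empty when not built with ASan.
asan_lib=$(ldd "$PLUGIN" 2>/dev/null | grep -E 'libasan|libclang_rt\.asan' | awk '{print $3}')

preload="$asan_lib $PLUGIN"
fio_cmd="$FIO_BIN --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 --spdk_json_conf=bdev.json bdev.fio"

# The harness would then exec: LD_PRELOAD="$preload" $fio_cmd
echo "LD_PRELOAD=$preload $fio_cmd"
```

Preloading the sanitizer runtime first matters when the plugin was built with ASan: the runtime must initialize before the plugin's instrumented code is loaded into an uninstrumented fio binary.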
blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:33:28.574 13:33:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:33:28.574 13:33:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:33:28.574 13:33:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:33:28.574 13:33:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:33:28.574 13:33:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:33:28.574 13:33:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:33:28.574 13:33:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:33:28.574 13:33:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:33:28.574 13:33:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:33:28.574 13:33:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "940e6004-6747-54d8-a94e-1dddc5208b65"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "940e6004-6747-54d8-a94e-1dddc5208b65",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' 
"write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "ec484ab3-6f50-5bdb-a9ae-fc9adf4ebde3"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ec484ab3-6f50-5bdb-a9ae-fc9adf4ebde3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "bab2026d-7b6f-5013-a44e-be82723d71c4"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "bab2026d-7b6f-5013-a44e-be82723d71c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "418cd8fd-bb79-5c4f-b5ae-2a8e0ee094c6"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "418cd8fd-bb79-5c4f-b5ae-2a8e0ee094c6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 
-- # [[ -n crypto_ram 00:33:28.575 crypto_ram1 00:33:28.575 crypto_ram2 00:33:28.575 crypto_ram3 ]] 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "940e6004-6747-54d8-a94e-1dddc5208b65"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "940e6004-6747-54d8-a94e-1dddc5208b65",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "ec484ab3-6f50-5bdb-a9ae-fc9adf4ebde3"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ec484ab3-6f50-5bdb-a9ae-fc9adf4ebde3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' 
"nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "bab2026d-7b6f-5013-a44e-be82723d71c4"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "bab2026d-7b6f-5013-a44e-be82723d71c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "418cd8fd-bb79-5c4f-b5ae-2a8e0ee094c6"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "418cd8fd-bb79-5c4f-b5ae-2a8e0ee094c6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram]' 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram1]' 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram1 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram2]' 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # 
echo filename=crypto_ram2 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram3]' 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram3 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:33:28.575 ************************************ 00:33:28.575 START TEST bdev_fio_trim 00:33:28.575 ************************************ 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
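The blockdev.sh trace above builds the trim job file by filtering bdevs with `jq -r 'select(.supported_io_types.unmap == true) | .name'` and emitting a `[job_<name>]` section per unmap-capable bdev. A standalone sketch of that generation loop follows; the bdev names are taken from this log, and the temp file is an illustrative stand-in for bdev.fio.

```shell
# Sketch of blockdev.sh's job-file generation: for each bdev that jq reported
# as unmap-capable, append a [job_<name>] section pointing fio at that bdev.
bdevs="crypto_ram crypto_ram1 crypto_ram2 crypto_ram3"   # names from the log
config=$(mktemp)
for b in $bdevs; do
    printf '[job_%s]\nfilename=%s\n' "$b" "$b" >> "$config"
done
cat "$config"
```

Filtering on `supported_io_types.unmap` keeps trim-incapable bdevs out of the job file, since fio's trimwrite workload would otherwise fail against them.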
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in 
"${sanitizers[@]}" 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:28.575 13:33:07 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:33:28.575 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:28.575 job_crypto_ram1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:28.575 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:28.575 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:33:28.575 fio-3.35 00:33:28.575 Starting 4 threads 00:33:28.575 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested 
device 0000:3d:01.0 cannot be used 00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested device 0000:3d:02.3 cannot be used 00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested device 0000:3d:02.6 
cannot be used 00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:28.576 EAL: Requested device 0000:3f:02.4 cannot be used 
00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:28.576 EAL: Requested device 0000:3f:02.5 cannot be used
00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:28.576 EAL: Requested device 0000:3f:02.6 cannot be used
00:33:28.576 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:28.576 EAL: Requested device 0000:3f:02.7 cannot be used
00:33:40.844
00:33:40.844 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=900315: Fri Jul 26 13:33:20 2024
00:33:40.844 write: IOPS=36.6k, BW=143MiB/s (150MB/s)(1429MiB/10001msec); 0 zone resets
00:33:40.844 slat (usec): min=16, max=491, avg=64.40, stdev=34.93
00:33:40.844 clat (usec): min=39, max=1229, avg=230.65, stdev=129.29
00:33:40.844 lat (usec): min=65, max=1413, avg=295.05, stdev=147.24
00:33:40.844 clat percentiles (usec):
00:33:40.844 | 50.000th=[ 202], 99.000th=[ 627], 99.900th=[ 725], 99.990th=[ 807],
00:33:40.844 | 99.999th=[ 1156]
00:33:40.844 bw ( KiB/s): min=144160, max=170688, per=100.00%, avg=146542.74, stdev=1786.32, samples=76
00:33:40.844 iops : min=36040, max=42672, avg=36635.79, stdev=446.58, samples=76
00:33:40.844 trim: IOPS=36.6k, BW=143MiB/s (150MB/s)(1429MiB/10001msec); 0 zone resets
00:33:40.844 slat (usec): min=5, max=287, avg=16.67, stdev= 6.55
00:33:40.844 clat (usec): min=65, max=1414, avg=295.27, stdev=147.27
00:33:40.844 lat (usec): min=74, max=1442, avg=311.93, stdev=149.95
00:33:40.844 clat percentiles (usec):
00:33:40.844 | 50.000th=[ 260], 99.000th=[ 734], 99.900th=[ 857], 99.990th=[ 955],
00:33:40.844 | 99.999th=[ 1336]
00:33:40.844 bw ( KiB/s): min=144160, max=170688, per=100.00%, avg=146543.16, stdev=1786.33, samples=76
00:33:40.844 iops : min=36040, max=42672, avg=36635.79, stdev=446.58, samples=76
00:33:40.844 lat (usec) : 50=0.90%, 100=7.96%, 250=46.71%, 500=37.77%, 750=6.26%
00:33:40.844 lat (usec) : 1000=0.40%
00:33:40.844 lat (msec) : 2=0.01%
00:33:40.844 cpu : usr=99.50%, sys=0.00%, ctx=56, majf=0, minf=94
00:33:40.844 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:33:40.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:40.844 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:40.844 issued rwts: total=0,365865,365866,0 short=0,0,0,0 dropped=0,0,0,0
00:33:40.844 latency : target=0, window=0, percentile=100.00%, depth=8
00:33:40.844
00:33:40.844 Run status group 0 (all jobs):
00:33:40.844 WRITE: bw=143MiB/s (150MB/s), 143MiB/s-143MiB/s (150MB/s-150MB/s), io=1429MiB (1499MB), run=10001-10001msec
00:33:40.844 TRIM: bw=143MiB/s (150MB/s), 143MiB/s-143MiB/s (150MB/s-150MB/s), io=1429MiB (1499MB), run=10001-10001msec
00:33:40.844
00:33:40.844 real 0m13.447s
00:33:40.844 user 0m53.923s
00:33:40.844 sys 0m0.482s
00:33:40.844 13:33:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable
00:33:40.844 13:33:20 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x
00:33:40.844 ************************************
00:33:40.844 END TEST bdev_fio_trim
00:33:40.844 ************************************
00:33:40.844 13:33:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f
00:33:40.844 13:33:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:33:40.844 13:33:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@369 -- # popd
00:33:40.844 /var/jenkins/workspace/crypto-phy-autotest/spdk
00:33:40.844 13:33:20 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT
00:33:40.844
00:33:40.844 real 0m27.279s
00:33:40.844 user 1m46.775s
00:33:40.844 sys 0m1.182s
00:33:40.844 13:33:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable
00:33:40.844 13:33:20 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:33:40.844 ************************************
00:33:40.844 END TEST bdev_fio
00:33:40.844 ************************************
00:33:40.844 13:33:20 blockdev_crypto_qat -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:33:40.844 13:33:20 blockdev_crypto_qat -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:33:40.844 13:33:20 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:33:40.844 13:33:20 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable
00:33:40.844 13:33:20 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:33:40.844 ************************************
00:33:40.844 START TEST bdev_verify
00:33:40.844 ************************************
00:33:40.844 13:33:20 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:33:40.844 [2024-07-26 13:33:20.826347] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization...
00:33:40.844 [2024-07-26 13:33:20.826402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid902109 ]
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3d:01.0 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3d:01.1 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3d:01.2 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3d:01.3 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3d:01.4 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3d:01.5 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3d:01.6 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3d:01.7 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3d:02.0 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3d:02.1 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3d:02.2 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3d:02.3 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3d:02.4 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3d:02.5 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3d:02.6 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3d:02.7 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3f:01.0 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3f:01.1 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3f:01.2 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3f:01.3 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3f:01.4 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3f:01.5 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3f:01.6 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3f:01.7 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3f:02.0 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3f:02.1 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3f:02.2 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3f:02.3 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3f:02.4 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3f:02.5 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3f:02.6 cannot be used
00:33:40.845 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:40.845 EAL: Requested device 0000:3f:02.7 cannot be used
00:33:40.845 [2024-07-26 13:33:20.958623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:33:40.845 [2024-07-26 13:33:21.042059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:40.845 [2024-07-26 13:33:21.042063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:33:40.845 [2024-07-26 13:33:21.063397] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat
00:33:40.845 [2024-07-26 13:33:21.071428] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:33:40.845 [2024-07-26 13:33:21.079448] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:33:40.845 [2024-07-26 13:33:21.175603] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96
00:33:43.378 [2024-07-26 13:33:23.337838] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc"
00:33:43.378 [2024-07-26 13:33:23.337917] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:33:43.378 [2024-07-26 13:33:23.337930] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:33:43.378 [2024-07-26 13:33:23.345856] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts"
00:33:43.378 [2024-07-26 13:33:23.345875] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:33:43.378 [2024-07-26 13:33:23.345886] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:33:43.379 [2024-07-26 13:33:23.353880] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2"
00:33:43.379 [2024-07-26 13:33:23.353896] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:33:43.379 [2024-07-26 13:33:23.353906] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:33:43.379 [2024-07-26 13:33:23.361902] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2"
00:33:43.379 [2024-07-26 13:33:23.361921] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:33:43.379 [2024-07-26 13:33:23.361931] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:33:43.379 Running I/O for 5 seconds...
00:33:48.641
00:33:48.641 Latency(us)
00:33:48.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:48.641 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:48.641 Verification LBA range: start 0x0 length 0x1000
00:33:48.641 crypto_ram : 5.05 514.40 2.01 0.00 0.00 247944.80 2057.83 163577.86
00:33:48.641 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:33:48.641 Verification LBA range: start 0x1000 length 0x1000
00:33:48.641 crypto_ram : 5.06 515.42 2.01 0.00 0.00 247292.64 2188.90 163577.86
00:33:48.641 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:48.641 Verification LBA range: start 0x0 length 0x1000
00:33:48.641 crypto_ram1 : 5.06 515.85 2.02 0.00 0.00 246715.87 2084.04 152672.67
00:33:48.641 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:33:48.641 Verification LBA range: start 0x1000 length 0x1000
00:33:48.641 crypto_ram1 : 5.06 518.41 2.03 0.00 0.00 245514.18 2411.72 151833.80
00:33:48.641 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:48.641 Verification LBA range: start 0x0 length 0x1000
00:33:48.641 crypto_ram2 : 5.04 4034.85 15.76 0.00 0.00 31488.91 3722.44 27682.41
00:33:48.641 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:33:48.641 Verification LBA range: start 0x1000 length 0x1000
00:33:48.641 crypto_ram2 : 5.04 4036.65 15.77 0.00 0.00 31467.03 6973.03 27682.41
00:33:48.641 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:48.641 Verification LBA range: start 0x0 length 0x1000
00:33:48.641 crypto_ram3 : 5.05 4033.69 15.76 0.00 0.00 31401.26 3407.87 27892.12
00:33:48.641 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:33:48.641 Verification LBA range: start 0x1000 length 0x1000
00:33:48.641 crypto_ram3 : 5.05 4054.20 15.84 0.00 0.00 31242.29 1703.94 27682.41
00:33:48.641 ===================================================================================================================
00:33:48.641 Total : 18223.47 71.19 0.00 0.00 55856.52 1703.94 163577.86
00:33:48.641
00:33:48.641 real 0m8.085s
00:33:48.641 user 0m15.387s
00:33:48.641 sys 0m0.336s
00:33:48.641 13:33:28 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:33:48.641 13:33:28 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:33:48.641 ************************************
00:33:48.641 END TEST bdev_verify
00:33:48.641 ************************************
00:33:48.641 13:33:28 blockdev_crypto_qat -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:33:48.641 13:33:28 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:33:48.641 13:33:28 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable
00:33:48.641 13:33:28 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:33:48.641 ************************************
00:33:48.641 START TEST bdev_verify_big_io
00:33:48.641 ************************************
00:33:48.641 13:33:28 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:33:48.641 [2024-07-26 13:33:29.000509] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization...
00:33:48.641 [2024-07-26 13:33:29.000567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid903445 ]
00:33:48.641 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.641 EAL: Requested device 0000:3d:01.0 cannot be used
00:33:48.641 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.641 EAL: Requested device 0000:3d:01.1 cannot be used
00:33:48.641 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.641 EAL: Requested device 0000:3d:01.2 cannot be used
00:33:48.641 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.641 EAL: Requested device 0000:3d:01.3 cannot be used
00:33:48.641 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.641 EAL: Requested device 0000:3d:01.4 cannot be used
00:33:48.641 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.641 EAL: Requested device 0000:3d:01.5 cannot be used
00:33:48.641 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.641 EAL: Requested device 0000:3d:01.6 cannot be used
00:33:48.641 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.641 EAL: Requested device 0000:3d:01.7 cannot be used
00:33:48.641 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.641 EAL: Requested device 0000:3d:02.0 cannot be used
00:33:48.641 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.641 EAL: Requested device 0000:3d:02.1 cannot be used
00:33:48.641 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.641 EAL: Requested device 0000:3d:02.2 cannot be used
00:33:48.641 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.641 EAL: Requested device 0000:3d:02.3 cannot be used
00:33:48.641 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.641 EAL: Requested device 0000:3d:02.4 cannot be used
00:33:48.641 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.641 EAL: Requested device 0000:3d:02.5 cannot be used
00:33:48.641 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.641 EAL: Requested device 0000:3d:02.6 cannot be used
00:33:48.641 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.641 EAL: Requested device 0000:3d:02.7 cannot be used
00:33:48.641 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.641 EAL: Requested device 0000:3f:01.0 cannot be used
00:33:48.641 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.641 EAL: Requested device 0000:3f:01.1 cannot be used
00:33:48.641 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.641 EAL: Requested device 0000:3f:01.2 cannot be used
00:33:48.641 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.641 EAL: Requested device 0000:3f:01.3 cannot be used
00:33:48.641 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.641 EAL: Requested device 0000:3f:01.4 cannot be used
00:33:48.641 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.641 EAL: Requested device 0000:3f:01.5 cannot be used
00:33:48.641 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.641 EAL: Requested device 0000:3f:01.6 cannot be used
00:33:48.641 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.641 EAL: Requested device 0000:3f:01.7 cannot be used
00:33:48.641 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.641 EAL: Requested device 0000:3f:02.0 cannot be used
00:33:48.642 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.642 EAL: Requested device 0000:3f:02.1 cannot be used
00:33:48.642 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.642 EAL: Requested device 0000:3f:02.2 cannot be used
00:33:48.642 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.642 EAL: Requested device 0000:3f:02.3 cannot be used
00:33:48.642 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.642 EAL: Requested device 0000:3f:02.4 cannot be used
00:33:48.642 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.642 EAL: Requested device 0000:3f:02.5 cannot be used
00:33:48.642 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.642 EAL: Requested device 0000:3f:02.6 cannot be used
00:33:48.642 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:33:48.642 EAL: Requested device 0000:3f:02.7 cannot be used
00:33:48.642 [2024-07-26 13:33:29.132462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:33:48.900 [2024-07-26 13:33:29.217046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:48.900 [2024-07-26 13:33:29.217051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:33:48.900 [2024-07-26 13:33:29.238385] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat
00:33:48.900 [2024-07-26 13:33:29.246414] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:33:48.900 [2024-07-26 13:33:29.254434] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:33:48.900 [2024-07-26 13:33:29.351999] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96
00:33:51.428 [2024-07-26 13:33:31.510701] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc"
00:33:51.428 [2024-07-26 13:33:31.510781] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:33:51.428 [2024-07-26 13:33:31.510795] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:33:51.428 [2024-07-26 13:33:31.518717] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts"
00:33:51.428 [2024-07-26 13:33:31.518736] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:33:51.428 [2024-07-26 13:33:31.518746] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:33:51.428 [2024-07-26 13:33:31.526738] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2"
00:33:51.428 [2024-07-26 13:33:31.526755] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:33:51.428 [2024-07-26 13:33:31.526765] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:33:51.428 [2024-07-26 13:33:31.534760] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2"
00:33:51.428 [2024-07-26 13:33:31.534777] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:33:51.428 [2024-07-26 13:33:31.534787] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:33:51.428 Running I/O for 5 seconds...
00:33:51.996 [2024-07-26 13:33:32.373771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.374183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.374259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.374304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.374344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.374383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.374790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.374808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.378078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.378123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.378168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.378210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.378664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.378706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.378746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.378791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.379187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.379206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.382304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.382349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.382389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.382428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.382864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.382906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.382945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.382990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.383382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.383400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.386613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.386657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.386696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.386735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.387150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.387193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.387233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.387272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.387603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.387620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.390864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.390908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.390962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.391005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.391522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.391565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.391618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.391673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.392031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.392048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.395066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.395111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.395155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.395198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.395618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.395660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.395712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.395757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.396195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.396213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.399309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.399374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.399414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.399455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.399903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.399944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.399984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.400025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.400436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.400454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.996 [2024-07-26 13:33:32.403460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.403505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.403543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.403581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.404025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.404067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.404106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.404150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.404553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.404570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.407509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.407554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.407593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.407635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.408075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.408117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.408174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.408214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.408600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.408617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.411643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.411688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.411726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.411765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.412213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.412258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.412296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.412334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.412756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.412773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.415902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.415946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.415984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.416026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.416490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.416532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.416571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.416609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.417004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.417022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.419993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.420036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.420074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.420112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.420546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.420588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.420626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.420668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.421086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.421102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.423949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.423993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.424031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:51.997 [2024-07-26 13:33:32.424073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.997 [2024-07-26 13:33:32.424519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.997 [2024-07-26 13:33:32.424561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.997 [2024-07-26 13:33:32.424600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.997 [2024-07-26 13:33:32.424638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.997 [2024-07-26 13:33:32.424961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.997 [2024-07-26 13:33:32.424978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.997 [2024-07-26 13:33:32.427931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.997 [2024-07-26 13:33:32.427978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.997 [2024-07-26 13:33:32.428017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.997 [2024-07-26 13:33:32.428055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.997 [2024-07-26 13:33:32.428471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.997 [2024-07-26 13:33:32.428525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:51.997 [2024-07-26 13:33:32.428589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.997 [2024-07-26 13:33:32.428638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.997 [2024-07-26 13:33:32.429025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.997 [2024-07-26 13:33:32.429041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.997 [2024-07-26 13:33:32.432059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.997 [2024-07-26 13:33:32.432102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.997 [2024-07-26 13:33:32.432144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.997 [2024-07-26 13:33:32.432192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.997 [2024-07-26 13:33:32.432613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.997 [2024-07-26 13:33:32.432656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.997 [2024-07-26 13:33:32.432706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.997 [2024-07-26 13:33:32.432746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.997 [2024-07-26 13:33:32.433154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:51.997 [2024-07-26 13:33:32.433175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.436368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.436423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.436461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.436499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.436861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.436903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.436942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.436979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.437408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.437426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.440327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.440375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:51.998 [2024-07-26 13:33:32.440431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.440486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.440878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.440919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.440957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.440995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.441398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.441417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.444190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.444233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.444286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.444323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.444784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:51.998 [2024-07-26 13:33:32.444829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.444868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.444907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.445317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.445338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.448230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.448274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.448313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.448351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.448779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.448821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.448860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.448898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:51.998 [2024-07-26 13:33:32.449253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.449270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.452038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.452083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.452121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.452164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.452569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.452611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.452649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.452687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.453094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.453111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.455862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:51.998 [2024-07-26 13:33:32.455905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.455946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.455988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.456479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.456521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.456561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.456599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.456998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.457016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.459827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.459872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.459914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.459952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:51.998 [2024-07-26 13:33:32.460389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.460432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.460471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.460509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.460822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.460839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.463557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.463601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.463640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.463680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.464114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.464172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.464211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:51.998 [2024-07-26 13:33:32.464260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.464611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.464628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.467530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.467577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.467616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.467658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.468069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.468113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.468168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.468210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.468597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.998 [2024-07-26 13:33:32.468613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:51.998 [2024-07-26 13:33:32.471644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.471695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.471734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.471775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.472169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.472212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.472250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.472304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.472758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.472774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.475462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.475517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.475555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:51.999 [2024-07-26 13:33:32.475593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.475984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.476025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.476063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.476101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.476505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.476522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.479194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.479237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.479275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.479312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.479764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.479805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:51.999 [2024-07-26 13:33:32.479844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.479881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.480267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.480286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.483007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.483054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.483093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.483131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.483569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.483611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.483663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.483701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.484146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:51.999 [2024-07-26 13:33:32.484163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.486874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.486917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.486954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.486993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.487401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.487444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.487482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.487520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.487928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.487945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.489653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:51.999 [2024-07-26 13:33:32.489700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:51.999 [2024-07-26 13:33:32.489738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:52.262 [... previous message repeated for each failing task, timestamps 13:33:32.489 through 13:33:32.768 ...]
00:33:52.262 [2024-07-26 13:33:32.769732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.262 [2024-07-26 13:33:32.771213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.262 [2024-07-26 13:33:32.772022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.262 [2024-07-26 13:33:32.772482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.262 [2024-07-26 13:33:32.772499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.262 [2024-07-26 13:33:32.775871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.262 [2024-07-26 13:33:32.777361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.262 [2024-07-26 13:33:32.778914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.262 [2024-07-26 13:33:32.779799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.262 [2024-07-26 13:33:32.781614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.262 [2024-07-26 13:33:32.783159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.262 [2024-07-26 13:33:32.783871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.523 [2024-07-26 13:33:32.784247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.523 [2024-07-26 13:33:32.784638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.523 [2024-07-26 13:33:32.784654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.523 [2024-07-26 13:33:32.788001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.523 [2024-07-26 13:33:32.789275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.523 [2024-07-26 13:33:32.790655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.523 [2024-07-26 13:33:32.792144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.523 [2024-07-26 13:33:32.793889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.523 [2024-07-26 13:33:32.794459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.523 [2024-07-26 13:33:32.794822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.523 [2024-07-26 13:33:32.795186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.523 [2024-07-26 13:33:32.795618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.523 [2024-07-26 13:33:32.795636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.523 [2024-07-26 13:33:32.798690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.523 [2024-07-26 13:33:32.799528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.523 [2024-07-26 13:33:32.800771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.523 [2024-07-26 13:33:32.802251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.523 [2024-07-26 13:33:32.803903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.523 [2024-07-26 13:33:32.804267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.523 [2024-07-26 13:33:32.804625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.523 [2024-07-26 13:33:32.804985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.523 [2024-07-26 13:33:32.805400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.523 [2024-07-26 13:33:32.805418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.523 [2024-07-26 13:33:32.808135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.523 [2024-07-26 13:33:32.809373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.523 [2024-07-26 13:33:32.810785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.523 [2024-07-26 13:33:32.811469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.523 [2024-07-26 13:33:32.812269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.523 [2024-07-26 13:33:32.812628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.523 [2024-07-26 13:33:32.812985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.813367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.813737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.813753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.816472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.816845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.817208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.817566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.818343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.818709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.819069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.524 [2024-07-26 13:33:32.819431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.819839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.819855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.822441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.822807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.823172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.823219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.823959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.824326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.824683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.825041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.825469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.825486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.524 [2024-07-26 13:33:32.827984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.828353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.828717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.829078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.829121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.829556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.829921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.830281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.830658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.831025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.831487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.831504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.833673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.524 [2024-07-26 13:33:32.833719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.833780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.833832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.834278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.834325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.834364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.834401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.834440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.834862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.834879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.837039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.837082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.837131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.524 [2024-07-26 13:33:32.837174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.837624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.837671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.837711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.837753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.837794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.838154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.838170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.840341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.840383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.840420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.840460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.840877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.524 [2024-07-26 13:33:32.840924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.840963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.841001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.841040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.841467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.841486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.843650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.843692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.843730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.843768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.844143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.844191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.844229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.524 [2024-07-26 13:33:32.844268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.844305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.844700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.844717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.846899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.846943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.846981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.847034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.847478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.847530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.847572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.847610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.847649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.524 [2024-07-26 13:33:32.848032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.848050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.850284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.524 [2024-07-26 13:33:32.850336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.850378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.850415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.850815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.850863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.850904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.850945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.850983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.851353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.851369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.525 [2024-07-26 13:33:32.853651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.853695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.853732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.853770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.854137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.854201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.854253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.854305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.854343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.854693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.854709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.856958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.857001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.525 [2024-07-26 13:33:32.857043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.857081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.857443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.857501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.857540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.857591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.857630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.858093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.858109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.860539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.860597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.860654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.525 [2024-07-26 13:33:32.860694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.525 [2024-07-26 13:33:32.861079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [identical *ERROR* message repeated continuously from 13:33:32.861151 through 13:33:32.952516]
00:33:52.528 [2024-07-26 13:33:32.953556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.954443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.954750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.956257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.957747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.958837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.959199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.959606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.959623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.962973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.964456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.965165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.966448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.528 [2024-07-26 13:33:32.966698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.968197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.969776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.970147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.970505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.970873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.970888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.974079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.975334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.976664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.977904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.978157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.979667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.528 [2024-07-26 13:33:32.980517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.980875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.981241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.981699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.981716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.984710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.985463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.986713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.988202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.988451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.528 [2024-07-26 13:33:32.989904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:32.990268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:32.990630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.529 [2024-07-26 13:33:32.990989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:32.991400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:32.991417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:32.993771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:32.995435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:32.996917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:32.998489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:32.998742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:32.999310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:32.999669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.000027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.000387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.000721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.529 [2024-07-26 13:33:33.000737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.003000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.004250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.005726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.007211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.007511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.007882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.008256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.008632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.009047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.009300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.009317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.012207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.529 [2024-07-26 13:33:33.013761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.015250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.016607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.016984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.017353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.017713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.018071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.019441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.019727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.019743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.022440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.023929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.025417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.529 [2024-07-26 13:33:33.025849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.026297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.026665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.027023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.027794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.029048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.029302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.029318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.032277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.033752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.034782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.035156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.035556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.529 [2024-07-26 13:33:33.035919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.036280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.037851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.039400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.039650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.039665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.042633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.043968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.044332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.044692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.045097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.529 [2024-07-26 13:33:33.045467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.046878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.788 [2024-07-26 13:33:33.048150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.049640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.049892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.049908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.052884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.053262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.053622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.053979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.054390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.055303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.056545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.058036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.059504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.788 [2024-07-26 13:33:33.059790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.059807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.062069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.062452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.062810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.063174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.063573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.065012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.066597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.068098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.069528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.069847] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.069863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.788 [2024-07-26 13:33:33.071625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.071985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.072347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.072705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.072952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.074196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.075662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.077144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.077837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.078085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.078101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.079938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.080303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.788 [2024-07-26 13:33:33.080660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.081600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.081921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.083424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.084910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.086002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.087491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.087800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.087819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.089853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.090221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.090813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.092063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.788 [2024-07-26 13:33:33.092317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.093911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.095530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.096490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.097743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.097995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.098011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.100201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.100563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.102068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.103411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.103662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.788 [2024-07-26 13:33:33.105146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.788 [2024-07-26 13:33:33.105846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:52.790 [2024-07-26 13:33:33.281101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.281152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.281193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.281231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.281270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.281637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.281653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.283787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.283829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.283869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.283908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.284321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.284367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.790 [2024-07-26 13:33:33.284406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.284449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.284489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.284959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.284979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.287091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.287134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.287179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.287218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.287605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.287651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.287689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.287728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.790 [2024-07-26 13:33:33.287766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.288190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.288207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.290387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.290432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.290473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.290511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.290958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.291008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.291048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.291086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.291124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.291510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.790 [2024-07-26 13:33:33.291527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.293711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.293752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.293791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.293832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.294259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.294310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.294352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.294402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.294443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.294692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.294708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.296728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.790 [2024-07-26 13:33:33.296772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.296810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.296848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.297221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.297269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.297307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.297346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.297384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.297824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.297841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.300044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.300099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.300156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.790 [2024-07-26 13:33:33.300195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.300665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.300711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.790 [2024-07-26 13:33:33.300751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.300790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.300828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.301159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.301175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.302565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.302608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.302650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.302688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.302932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.791 [2024-07-26 13:33:33.302984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.303022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.303066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.303103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.303352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.303369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.305179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.305221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.305259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.305297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.305685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.305732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.305770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.791 [2024-07-26 13:33:33.305808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.305846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.306284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.306305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.307714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.307755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.307792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.307829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.308253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.308308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.308347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.308385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.308422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:52.791 [2024-07-26 13:33:33.308707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.308724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.310285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.310327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.310368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.310406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.310813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.310859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.310898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.310936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.310985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.311401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:52.791 [2024-07-26 13:33:33.311419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.050 [2024-07-26 13:33:33.313091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.313133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.313174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.313212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.313457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.313513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.313551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.313595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.313635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.313943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.313959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.315366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.315408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.050 [2024-07-26 13:33:33.315447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.315484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.315912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.315958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.316000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.316039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.316077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.316493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.316510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.318461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.318505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.318552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.318590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.050 [2024-07-26 13:33:33.318833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.318890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.318931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.318969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.319006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.319262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.050 [2024-07-26 13:33:33.319279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.051 [2024-07-26 13:33:33.320723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.051 [2024-07-26 13:33:33.320764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.051 [2024-07-26 13:33:33.320804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.051 [2024-07-26 13:33:33.320842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.051 [2024-07-26 13:33:33.321237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.051 [2024-07-26 13:33:33.321305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.051 [2024-07-26 13:33:33.321356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.051 [2024-07-26 13:33:33.321396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.051 [2024-07-26 13:33:33.321435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.051 [2024-07-26 13:33:33.321852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.051 [2024-07-26 13:33:33.321869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.051 [2024-07-26 13:33:33.323844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.051 [2024-07-26 13:33:33.323892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.051 [2024-07-26 13:33:33.323929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.051 [2024-07-26 13:33:33.323967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.051 [2024-07-26 13:33:33.324215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.051 [2024-07-26 13:33:33.324271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.051 [2024-07-26 13:33:33.324313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.051 [2024-07-26 13:33:33.324350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.051 [2024-07-26 13:33:33.324388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [... same *ERROR* message repeated for timestamps 13:33:33.324629 through 13:33:33.529186 ...]
00:33:53.054 [2024-07-26 13:33:33.530675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.531370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.532619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.532871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.532887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.535150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.535513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.536740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.537978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.538237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.539726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.540716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.542309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.054 [2024-07-26 13:33:33.543761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.544013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.544029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.546437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.546805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.548236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.549819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.550075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.551563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.552349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.553605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.555086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.555345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.054 [2024-07-26 13:33:33.555361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.557761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.559207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.560492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.561979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.054 [2024-07-26 13:33:33.562242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.055 [2024-07-26 13:33:33.562972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.055 [2024-07-26 13:33:33.564395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.055 [2024-07-26 13:33:33.565967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.055 [2024-07-26 13:33:33.567456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.055 [2024-07-26 13:33:33.567709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.055 [2024-07-26 13:33:33.567725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.055 [2024-07-26 13:33:33.570416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.055 [2024-07-26 13:33:33.571664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.055 [2024-07-26 13:33:33.573054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.574416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.574707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.575952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.577340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.578479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.578837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.579268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.579286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.582656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.584151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.584584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.317 [2024-07-26 13:33:33.586088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.586346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.587871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.589351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.589726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.590091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.590535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.590552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.593056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.593429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.593789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.594156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.594595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.317 [2024-07-26 13:33:33.594971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.595335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.595698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.596062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.596436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.596454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.599022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.599396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.599762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.600124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.600569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.600937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.601302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.317 [2024-07-26 13:33:33.601663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.602026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.602485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.602502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.605005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.605377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.605737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.606105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.606520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.606889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.607258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.607630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.607989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.317 [2024-07-26 13:33:33.608450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.608471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.611007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.611377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.611739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.317 [2024-07-26 13:33:33.612118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.612582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.612951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.613314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.613671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.614032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.614494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.614511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.318 [2024-07-26 13:33:33.617093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.617475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.617840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.618202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.618588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.618954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.619321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.619685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.620052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.620471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.620488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.622960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.623327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.318 [2024-07-26 13:33:33.623688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.624045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.624391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.624760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.625121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.625495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.625867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.626297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.626315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.628882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.629248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.629615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.629979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.318 [2024-07-26 13:33:33.630416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.630786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.631159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.631520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.631884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.632223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.632241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.635021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.635403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.635451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.635807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.636175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.636544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.318 [2024-07-26 13:33:33.636908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.637285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.637648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.638071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.638088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.640585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.640948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.641315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.641362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.641780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.642161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.642528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.642887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.318 [2024-07-26 13:33:33.643252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.643630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.643646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.645739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.645785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.645822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.645861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.646281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.646329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.646384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.646435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.646474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.318 [2024-07-26 13:33:33.646915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.318 [2024-07-26 13:33:33.646932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:53.322 [2024-07-26 13:33:33.719273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.719518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.719534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.721198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.721242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.721280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.721317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.721585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.721637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.721677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.721716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.721765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.722196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.322 [2024-07-26 13:33:33.722214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.724297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.724339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.724385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.724427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.724677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.724723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.724761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.724822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.724860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.725103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.725119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.726737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.322 [2024-07-26 13:33:33.726780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.726817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.726855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.727101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.727162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.727201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.727238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.727275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.727641] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.727657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.730209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.730252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.730289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.322 [2024-07-26 13:33:33.730326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.730630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.730684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.730724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.730762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.730801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.731045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.731062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.732596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.732640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.734124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.734172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.734422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.322 [2024-07-26 13:33:33.734477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.734515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.734564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.322 [2024-07-26 13:33:33.734604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.734933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.734949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.737108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.737155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.737193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.738438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.738690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.738743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.738782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.323 [2024-07-26 13:33:33.738820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.738858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.739106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.739122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.742176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.742547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.742907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.743273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.743687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.744449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.745679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.747146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.748619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.323 [2024-07-26 13:33:33.748875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.748892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.751276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.751660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.752021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.752390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.752805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.754365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.755772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.757253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.758826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.759300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.759317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.323 [2024-07-26 13:33:33.761079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.761451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.761813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.762180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.762469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.763718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.765195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.766689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.767589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.767840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.767856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.769729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.770095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.323 [2024-07-26 13:33:33.770461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.771071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.771328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.772856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.774497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.776070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.776434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.776683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.776700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.778659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.779028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.779400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.779910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.323 [2024-07-26 13:33:33.780165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.781591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.783101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.784681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.785568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.785853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.785869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.787814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.788185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.788546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.789881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.790179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.791678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.323 [2024-07-26 13:33:33.793168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.794036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.795627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.795876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.795892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.798035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.798409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.799169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.800401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.800659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.802172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.323 [2024-07-26 13:33:33.803577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.324 [2024-07-26 13:33:33.804742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.324 [2024-07-26 13:33:33.805981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.324 [2024-07-26 13:33:33.806235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.324 [2024-07-26 13:33:33.806252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.324 [2024-07-26 13:33:33.808680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.324 [2024-07-26 13:33:33.809049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.324 [2024-07-26 13:33:33.810631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.324 [2024-07-26 13:33:33.812189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.324 [2024-07-26 13:33:33.812440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.324 [2024-07-26 13:33:33.813949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.324 [2024-07-26 13:33:33.814647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.324 [2024-07-26 13:33:33.815893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.324 [2024-07-26 13:33:33.817382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.324 [2024-07-26 13:33:33.817634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.324 [2024-07-26 13:33:33.817650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.324 [2024-07-26 13:33:33.820064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.324 [2024-07-26 13:33:33.821257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.324 [2024-07-26 13:33:33.822497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.324 [2024-07-26 13:33:33.823963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.324 [2024-07-26 13:33:33.824220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.324 [2024-07-26 13:33:33.825278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.324 [2024-07-26 13:33:33.826840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.324 [2024-07-26 13:33:33.828254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.324 [2024-07-26 13:33:33.829743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.324 [2024-07-26 13:33:33.829994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.324 [2024-07-26 13:33:33.830010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.324 [2024-07-26 13:33:33.832671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.324 [2024-07-26 13:33:33.833958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:53.324 [last message repeated continuously through 2024-07-26 13:33:34.038727]
00:33:53.591 [2024-07-26 13:33:34.038744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.040834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.040878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.040920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.040959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.041265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.041315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.041354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.041391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.041428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.041767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.041783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.043823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.591 [2024-07-26 13:33:34.043866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.043903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.043942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.044382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.044439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.044479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.044518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.044556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.044951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.044968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.046994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.047038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.047090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.591 [2024-07-26 13:33:34.047157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.047533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.047592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.047630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.047668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.047705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.047977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.047997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.050049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.050095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.050133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.050178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.050520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.591 [2024-07-26 13:33:34.050578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.050618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.050656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.050708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.051110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.051127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.052976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.053025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.053074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.053114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.053547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.053594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.053634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.591 [2024-07-26 13:33:34.053673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.053711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.054099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.054115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.055872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.055914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.055955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.055992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.056357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.056413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.056454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.056492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.056535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.591 [2024-07-26 13:33:34.056928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.056946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.058916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.058979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.059018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.059082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.059457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.059521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.059562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.059600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.059638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.060032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.060049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.591 [2024-07-26 13:33:34.062219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.062266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.062304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.062342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.062686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.062742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.591 [2024-07-26 13:33:34.062781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.062821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.062860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.063270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.063289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.065260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.065302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.592 [2024-07-26 13:33:34.065340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.065383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.065629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.065680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.065727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.065769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.065806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.066052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.066069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.067604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.067646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.067687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.067724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.592 [2024-07-26 13:33:34.068112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.068183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.068236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.068276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.068313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.068726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.068747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.070768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.070813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.070850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.070887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.071131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.071191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.592 [2024-07-26 13:33:34.071229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.071267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.071305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.071550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.071567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.073122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.073167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.073211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.073256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.073500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.073552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.073593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.073632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.592 [2024-07-26 13:33:34.073670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.074008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.074024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.076214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.076256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.076293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.076330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.076607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.076659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.076697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.076735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.076772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.077014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.592 [2024-07-26 13:33:34.077031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.078610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.078653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.078693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.078730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.078976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.079027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.079065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.079102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.079144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.079507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.592 [2024-07-26 13:33:34.079524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.593 [2024-07-26 13:33:34.082090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.593 [2024-07-26 13:33:34.082137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.593 [2024-07-26 13:33:34.082197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.593 [2024-07-26 13:33:34.082235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.593 [2024-07-26 13:33:34.082481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.593 [2024-07-26 13:33:34.082543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.593 [2024-07-26 13:33:34.082585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.593 [2024-07-26 13:33:34.082623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.593 [2024-07-26 13:33:34.082659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.593 [2024-07-26 13:33:34.082920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.593 [2024-07-26 13:33:34.082937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.593 [2024-07-26 13:33:34.087574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.593 [2024-07-26 13:33:34.087622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.593 [2024-07-26 13:33:34.087661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.593 [2024-07-26 13:33:34.087700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:53.857 [identical message repeated continuously, differing only in timestamp, through 2024-07-26 13:33:34.339502]
00:33:53.857 [2024-07-26 13:33:34.339518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.341402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.341762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.342121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.343431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.345249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.346722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.347650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.348970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.349270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.349286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.351263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.351625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.857 [2024-07-26 13:33:34.351983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.352349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.353089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.353451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.353809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.354174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.354552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.354570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.357074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.357462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.357828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.358193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.358899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.857 [2024-07-26 13:33:34.359265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.359632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.359997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.360410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.360428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.362860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.363229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.363587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.363946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.364628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.364990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.365354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.365713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.857 [2024-07-26 13:33:34.366111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.366128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.368591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.368952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.369318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.369682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.370441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.370799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.371161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.371523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.371877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.371897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.374486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:53.857 [2024-07-26 13:33:34.374854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.375231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.375589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.376414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.376783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.377153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.377517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.377927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:53.857 [2024-07-26 13:33:34.377945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.119 [2024-07-26 13:33:34.380570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.119 [2024-07-26 13:33:34.380935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.119 [2024-07-26 13:33:34.381299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.119 [2024-07-26 13:33:34.381661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.119 [2024-07-26 13:33:34.382387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.119 [2024-07-26 13:33:34.382750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.119 [2024-07-26 13:33:34.383106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.119 [2024-07-26 13:33:34.383470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.119 [2024-07-26 13:33:34.383853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.119 [2024-07-26 13:33:34.383870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.119 [2024-07-26 13:33:34.386392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.119 [2024-07-26 13:33:34.386758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.119 [2024-07-26 13:33:34.387124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.119 [2024-07-26 13:33:34.387491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.119 [2024-07-26 13:33:34.388235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.119 [2024-07-26 13:33:34.388594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.119 [2024-07-26 13:33:34.388952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.119 [2024-07-26 13:33:34.389318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.119 [2024-07-26 13:33:34.389705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.389726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.392258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.392626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.392984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.393346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.394101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.394469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.394829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.395188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.395575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.395591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.120 [2024-07-26 13:33:34.397982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.398348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.398707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.399067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.399748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.400108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.400471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.400829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.401164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.401182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.403646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.404011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.404388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.120 [2024-07-26 13:33:34.404751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.405508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.405863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.406233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.406601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.407028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.407044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.409546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.409922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.410286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.410645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.411408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.411774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.120 [2024-07-26 13:33:34.412133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.412494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.412926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.412943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.415396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.415444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.415799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.416163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.416984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.417349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.417706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.418063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.418460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.120 [2024-07-26 13:33:34.418477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.421816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.422187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.422547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.422591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.423303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.423349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.424178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.424222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.424501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.424518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.426762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.426808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.120 [2024-07-26 13:33:34.427170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.427223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.427893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.427944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.428308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.428350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.428759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.428775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.431338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.431385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.431741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.431776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.432817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.120 [2024-07-26 13:33:34.432862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.434106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.434153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.434401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.434417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.435912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.435954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.435992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.436029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.437759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.437804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.438310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.120 [2024-07-26 13:33:34.438353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.121 [2024-07-26 13:33:34.438791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:54.124 [identical *ERROR* line repeated continuously from 13:33:34.438791 through 13:33:34.609942; duplicate occurrences elided]
00:33:54.124 [2024-07-26 13:33:34.611184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.124 [2024-07-26 13:33:34.612919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.124 [2024-07-26 13:33:34.614183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.124 [2024-07-26 13:33:34.614542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.124 [2024-07-26 13:33:34.614904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.124 [2024-07-26 13:33:34.615345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.124 [2024-07-26 13:33:34.615362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.124 [2024-07-26 13:33:34.618488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.124 [2024-07-26 13:33:34.619290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.124 [2024-07-26 13:33:34.620796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.124 [2024-07-26 13:33:34.622439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.124 [2024-07-26 13:33:34.624178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.124 [2024-07-26 13:33:34.624662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.124 [2024-07-26 13:33:34.625020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.124 [2024-07-26 13:33:34.625382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.124 [2024-07-26 13:33:34.625814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.124 [2024-07-26 13:33:34.625831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.124 [2024-07-26 13:33:34.628951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.124 [2024-07-26 13:33:34.629950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.124 [2024-07-26 13:33:34.631202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.124 [2024-07-26 13:33:34.632687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.124 [2024-07-26 13:33:34.634185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.124 [2024-07-26 13:33:34.634546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.124 [2024-07-26 13:33:34.634903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.124 [2024-07-26 13:33:34.635264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.124 [2024-07-26 13:33:34.635672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.124 [2024-07-26 13:33:34.635691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.124 [2024-07-26 13:33:34.637915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.124 [2024-07-26 13:33:34.639449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.124 [2024-07-26 13:33:34.641081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.642667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.643410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.643771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.644127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.644489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.644817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.644837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.647158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.648401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.386 [2024-07-26 13:33:34.649887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.651373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.652038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.652402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.652762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.653122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.653377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.653393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.656416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.658030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.659645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.661150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.661917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.386 [2024-07-26 13:33:34.662282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.662639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.663739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.664028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.664044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.666736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.668225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.669705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.670476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.671286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.671644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.672004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.673458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.386 [2024-07-26 13:33:34.673710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.673730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.676662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.677070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.677435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.677793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.678853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.680094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.681598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.683084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.683344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.683361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.685937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.386 [2024-07-26 13:33:34.686312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.686672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.687032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.688896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.690213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.691730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.693264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.693638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.693655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.695688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.386 [2024-07-26 13:33:34.696052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.696415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.696772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.387 [2024-07-26 13:33:34.697537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.697909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.698274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.698632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.699044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.699060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.701500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.701863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.702231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.702594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.703350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.703709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.704065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.387 [2024-07-26 13:33:34.704431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.704754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.704771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.707194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.707560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.707917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.708279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.709059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.709427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.709792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.710163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.710581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.710598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.387 [2024-07-26 13:33:34.713181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.713545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.713904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.714271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.714975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.715341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.715698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.716057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.716443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.716460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.719102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.719471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.719854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.387 [2024-07-26 13:33:34.720223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.720986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.721352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.721711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.722077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.722450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.722467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.724955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.725323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.725681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.726038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.726802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.727171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.387 [2024-07-26 13:33:34.727531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.727886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.728294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.728311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.730681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.731042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.731403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.731769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.732508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.732867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.733239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.733600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.733936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.387 [2024-07-26 13:33:34.733953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.736407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.736785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.737156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.737517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.738288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.738653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.739022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.739406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.739846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.739864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.742262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.387 [2024-07-26 13:33:34.742626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.387 [2024-07-26 13:33:34.742985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:54.391 [... previous message repeated ~270 times between 13:33:34.742985 and 13:33:34.853088 ...]
00:33:54.391 [2024-07-26 13:33:34.853129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.853171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.853209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.853455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.853471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.855004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.855054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.855095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.855132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.855420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.855462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.855501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.855541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.391 [2024-07-26 13:33:34.855933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.855949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.858030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.858079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.858117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.858165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.858440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.858489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.858532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.858573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.858818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.858834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.860361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.391 [2024-07-26 13:33:34.860403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.860440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.860478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.860759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.860800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.860838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.860875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.861218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.861235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.863092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.863147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.863186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.863227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.391 [2024-07-26 13:33:34.863679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.863721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.863761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.863799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.864096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.864112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.865542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.865586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.865624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.865666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.865949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.865991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.866037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.391 [2024-07-26 13:33:34.866097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.866347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.866364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.868187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.868229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.868268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.868306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.868697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.868738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.868777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.868815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.869225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.869243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.391 [2024-07-26 13:33:34.870658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.870699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.870736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.870774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.871164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.871206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.871244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.871281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.391 [2024-07-26 13:33:34.871560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.871576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.873052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.873094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.873806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.392 [2024-07-26 13:33:34.873850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.874308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.874351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.874390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.874440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.874693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.874709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.878067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.878113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.879588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.879633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.880048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.881277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.392 [2024-07-26 13:33:34.881344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.882805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.883053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.883068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.885436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.885483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.885521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.887118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.887460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.888907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.888951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.890425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.890782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.392 [2024-07-26 13:33:34.890797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.892201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.892253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.892293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.893347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.893714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.894075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.894117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.894162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.894414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.894430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.896241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.897724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.392 [2024-07-26 13:33:34.897767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.899236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.901239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.901285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.901327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.902808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.903061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.903077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.906980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.908242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.392 [2024-07-26 13:33:34.908288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.909765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.910052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.654 [2024-07-26 13:33:34.910092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.910796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.910840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.911094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.911111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.912624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.913683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.913725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.913757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.915067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.915111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.915832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.915874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.654 [2024-07-26 13:33:34.916299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.916321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.922051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.922101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.923702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.923755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.925184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.925560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.925925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.925967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.926007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.926366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.926782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.654 [2024-07-26 13:33:34.926799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.929304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.930872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.932275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.933766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.934016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.934677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.936103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.936463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.937160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.937415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.937432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.654 [2024-07-26 13:33:34.941679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.654 [2024-07-26 13:33:34.942937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:54.657 [... identical "Failed to get src_mbufs!" errors from accel_dpdk_cryptodev.c:468 repeated through 2024-07-26 13:33:35.174631; duplicate log lines elided]
00:33:54.657 [2024-07-26 13:33:35.174648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.930 [2024-07-26 13:33:35.177397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.930 [2024-07-26 13:33:35.177767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.930 [2024-07-26 13:33:35.177811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.930 [2024-07-26 13:33:35.179349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.930 [2024-07-26 13:33:35.179804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.930 [2024-07-26 13:33:35.180201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.930 [2024-07-26 13:33:35.180254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.930 [2024-07-26 13:33:35.180648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.930 [2024-07-26 13:33:35.180703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.930 [2024-07-26 13:33:35.181103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.930 [2024-07-26 13:33:35.181120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.930 [2024-07-26 13:33:35.185351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.930 [2024-07-26 13:33:35.186832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.930 [2024-07-26 13:33:35.186881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.930 [2024-07-26 13:33:35.187248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.930 [2024-07-26 13:33:35.187607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.930 [2024-07-26 13:33:35.188800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.188844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.189212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.190519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.190825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.190849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.192361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.193939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.193991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.931 [2024-07-26 13:33:35.195468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.195720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.195774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.196645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.197564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.197606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.198015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.198032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.202173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.203657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.203701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.204387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.204644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.931 [2024-07-26 13:33:35.206108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.207665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.207719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.209167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.209497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.209514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.211327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.211690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.212796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.212839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.213163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.213218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.214698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.931 [2024-07-26 13:33:35.214740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.216213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.216561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.216578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.221466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.221517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.222211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.222254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.222507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.222560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.222916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.223951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.223994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.931 [2024-07-26 13:33:35.224279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.224296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.225796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.225838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.225889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.225928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.226183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.226231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.226269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.226313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.226352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.226596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.226613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.931 [2024-07-26 13:33:35.230156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.230203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.230241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.230279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.230698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.230744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.230785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.230823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.230865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.231110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.231126] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.232630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.232672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.931 [2024-07-26 13:33:35.232709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.232746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.233024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.233076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.233115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.233157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.233196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.233441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.233460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.237318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.237363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.237400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.237445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.931 [2024-07-26 13:33:35.237887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.237933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.931 [2024-07-26 13:33:35.237973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.238012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.238050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.238414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.238431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.239853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.239902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.239944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.239983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.240252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.240311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.932 [2024-07-26 13:33:35.240350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.240387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.240430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.240676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.240692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.245623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.245669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.245706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.245744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.245998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.246052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.246090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.246129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.932 [2024-07-26 13:33:35.246176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.246584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.246602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.248045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.248088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.248125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.248167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.248599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.248654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.248693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.248730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.248767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.249067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.932 [2024-07-26 13:33:35.249083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.253181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.253232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.253271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.253309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.253561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.253609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.253647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.253684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.253728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.254180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.254197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.255762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.932 [2024-07-26 13:33:35.255804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.255852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.255892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.256135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.256195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.256235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.256273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.256312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.256575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.256591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.260332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.260380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.932 [2024-07-26 13:33:35.260417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.932 [2024-07-26 13:33:35.260458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.932 [... identical "Failed to get src_mbufs!" errors from accel_dpdk_cryptodev.c:468, repeated for timestamps 13:33:35.260863 through 13:33:35.404303, omitted ...] 
00:33:54.935 [2024-07-26 13:33:35.410249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.935 [2024-07-26 13:33:35.410620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.935 [2024-07-26 13:33:35.411636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.935 [2024-07-26 13:33:35.412901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.935 [2024-07-26 13:33:35.413157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.414651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.415832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.417229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.418475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.418727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.418743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.422629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.423040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.424405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.936 [2024-07-26 13:33:35.425907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.426165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.427670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.428458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.429705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.431188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.431439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.431455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.435024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.436390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.437633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.439117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.439375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:54.936 [2024-07-26 13:33:35.440205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.441713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.443360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.444925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.445186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.445202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.449559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.450804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.452292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:54.936 [2024-07-26 13:33:35.453772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.454077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.455405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.456648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.196 [2024-07-26 13:33:35.458120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.459600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.460039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.460056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.466685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.468228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.469695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.471084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.471393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.472642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.474103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.475579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.476628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.196 [2024-07-26 13:33:35.476887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.476903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.481554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.483043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.483779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.485283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.485541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.487043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.487602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.488973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.489336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.489727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.489744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.196 [2024-07-26 13:33:35.495606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.496616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.497850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.499343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.499597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.500776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.502046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.502548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.502910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.503168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.503185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.507309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.508824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.196 [2024-07-26 13:33:35.510455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.511983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.512240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.512800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.514028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.514390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.515272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.515546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.515562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.519673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.520934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.522445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.523925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.196 [2024-07-26 13:33:35.524194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.524985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.196 [2024-07-26 13:33:35.525972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.526337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.527420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.527735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.527752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.533330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.533705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.534067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.534443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.534824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.536493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.197 [2024-07-26 13:33:35.536857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.537318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.538639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.539069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.539086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.544205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.544577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.544939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.545309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.545633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.546818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.547182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.548107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.197 [2024-07-26 13:33:35.548959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.549376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.549394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.553433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.553802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.554170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.554535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.554791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.555594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.555955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.557271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.557738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.558144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.197 [2024-07-26 13:33:35.558163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.561490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.561856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.562231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.562598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.562849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.563407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.563768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.565350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.565718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.566136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.566159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.568937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.197 [2024-07-26 13:33:35.569314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.569679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.570113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.570376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.570747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.571108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.572551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.572909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.573296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.573315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.576045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.576425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.576793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.197 [2024-07-26 13:33:35.577501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.577760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.578132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.578858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.579920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.580289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.580662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.580679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.583378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.583747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.584112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.585356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.197 [2024-07-26 13:33:35.585742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.197 [2024-07-26 13:33:35.586114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:55.461 [identical *ERROR* entries from 13:33:35.587231 through 13:33:35.757948 repeated; duplicates omitted]
00:33:55.461 [2024-07-26 13:33:35.757986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.461 [2024-07-26 13:33:35.758242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.461 [2024-07-26 13:33:35.758296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.461 [2024-07-26 13:33:35.758334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.461 [2024-07-26 13:33:35.758373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.461 [2024-07-26 13:33:35.758412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.461 [2024-07-26 13:33:35.758661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.461 [2024-07-26 13:33:35.758678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.461 [2024-07-26 13:33:35.761329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.461 [2024-07-26 13:33:35.761387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.461 [2024-07-26 13:33:35.761434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.761493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.761743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.462 [2024-07-26 13:33:35.761788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.761835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.761875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.761912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.762162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.762179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.766575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.766622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.766677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.766718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.766967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.767020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.767062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.462 [2024-07-26 13:33:35.767101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.767143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.767561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.767578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.771107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.771158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.771196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.771234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.771478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.771534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.771572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.771610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.771648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.462 [2024-07-26 13:33:35.772016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.772032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.776309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.776356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.776395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.776434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.776839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.776885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.776925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.776963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.777017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.777270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.777287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.462 [2024-07-26 13:33:35.780848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.780903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.780942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.780981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.781280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.781330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.781368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.781406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.781443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.781736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.781752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.786432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.786480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.462 [2024-07-26 13:33:35.786521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.786558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.786873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.786924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.786963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.787003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.787046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.787442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.787460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.792180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.792231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.792268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.792305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.462 [2024-07-26 13:33:35.792551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.792607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.792646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.792684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.792721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.792965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.792981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.796365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.796417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.796455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.796492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.796741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.796793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.462 [2024-07-26 13:33:35.796831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.796869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.796907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.797156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.462 [2024-07-26 13:33:35.797173] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.801104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.801155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.801194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.801232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.801638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.801685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.801728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.801767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.463 [2024-07-26 13:33:35.801806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.802241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.802258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.805658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.805704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.805745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.805783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.806063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.806116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.806160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.806198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.806235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.806482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.463 [2024-07-26 13:33:35.806499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.810579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.810626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.810663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.810701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.810986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.811042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.811081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.811119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.811160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.811410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.811426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.816065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.463 [2024-07-26 13:33:35.816119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.816163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.816207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.816605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.816651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.816692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.816730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.816769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.817172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.817189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.820768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.820822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.820860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.463 [2024-07-26 13:33:35.820899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.821218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.821269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.821307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.821344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.821383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.821669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.821685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.825555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.826467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.826511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.827752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.828006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.463 [2024-07-26 13:33:35.828062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.828101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.828143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.828182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.828428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.828445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.832393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.832764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.832808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.833167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.833575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.833625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.463 [2024-07-26 13:33:35.834957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.463 [2024-07-26 13:33:35.834999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... same *ERROR* line repeated continuously through the following final occurrence ...]
00:33:55.729 [2024-07-26 13:33:36.045213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:55.729 [2024-07-26 13:33:36.045580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.045938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.046617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.047865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.048115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.048132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.051109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.052599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.053657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.054018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.054448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.054814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.055175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.729 [2024-07-26 13:33:36.056837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.058319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.058571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.058587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.061666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.063273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.063635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.063993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.064367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.064733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.065845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.067098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.068590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.729 [2024-07-26 13:33:36.068845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.068861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.071836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.072353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.072722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.073080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.073520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.074253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.075497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.076983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.078465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.078716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.078733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.729 [2024-07-26 13:33:36.081252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.081618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.081976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.729 [2024-07-26 13:33:36.082337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.082748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.084365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.085837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.087397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.089056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.089415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.089431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.091160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.091521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.730 [2024-07-26 13:33:36.091878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.092239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.092494] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.093753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.095246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.096730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.097411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.097665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.097681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.099508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.099558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.099913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.099953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.730 [2024-07-26 13:33:36.100359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.101240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.102490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.103983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.105467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.105772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.105789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.108046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.108098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.108468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.108511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.108897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.109271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.730 [2024-07-26 13:33:36.109313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.110225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.110267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.110578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.110594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.113284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.114751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.114796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.116270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.116615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.116992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.117034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.117393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.730 [2024-07-26 13:33:36.117435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.117872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.117889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.120872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.122053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.122097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.123337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.123589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.125098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.125147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.125808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.126170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.126581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.730 [2024-07-26 13:33:36.126599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.128521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.130012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.130055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.131234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.131512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.131562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.132812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.134298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.134341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.134588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.134604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.136800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.730 [2024-07-26 13:33:36.137438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.137485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.138721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.138972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.140480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.141857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.141900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.143273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.143571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.730 [2024-07-26 13:33:36.143587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.145332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.145694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.146052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.731 [2024-07-26 13:33:36.146093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.146474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.146524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.147764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.147808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.149281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.149532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.149548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.152552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.152598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.153402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.153445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.153922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.731 [2024-07-26 13:33:36.153969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.154329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.154687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.154729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.155106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.155122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.156538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.156581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.156619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.156658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.156908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.156956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.156994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.731 [2024-07-26 13:33:36.157032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.157076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.157326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.157343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.159129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.159176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.159214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.159252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.159652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.159698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.159737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.159775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.731 [2024-07-26 13:33:36.159813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.731 [2024-07-26 13:33:36.160227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:55.735 [2024-07-26 13:33:36.233129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.234613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.234863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.234879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.236979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.237022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.237384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.237427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.237676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.237730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.239328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.239380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.239421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.735 [2024-07-26 13:33:36.239667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.239682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.242692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.242738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.243793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.243835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.244292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.244658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.244700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.244752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.245105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.245523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.245541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.735 [2024-07-26 13:33:36.248495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.248550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.250047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.250090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.250341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.735 [2024-07-26 13:33:36.250395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.996 [2024-07-26 13:33:36.250436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.250795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.250837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.251258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.251275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.254564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.254613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.997 [2024-07-26 13:33:36.254650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.256137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.256498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.258008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.258063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.259535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.259576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.259821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.259838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.261884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.262250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.262293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.263728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.997 [2024-07-26 13:33:36.264010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.265516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.265560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.265597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.267079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.267521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.267538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.269413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.269774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.270132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.270495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.270828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.272062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.997 [2024-07-26 13:33:36.273539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.275017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.276167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.276433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.276448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.278254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.278615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.278985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.279381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.279634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.280956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.282473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.284029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.997 [2024-07-26 13:33:36.284533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.284782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.284798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.286658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.287021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.287383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.287746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.288070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.288446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.288803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.289165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.289524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.289880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.997 [2024-07-26 13:33:36.289896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.292404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.292767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.293132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.293512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.293935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.294305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.294665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.295030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.295398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.295828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.295845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.298344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.997 [2024-07-26 13:33:36.298715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.299073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.299438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.299827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.300203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.300569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.300928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.301293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.301705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.301722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.304134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.304499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.304858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.997 [2024-07-26 13:33:36.305224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.305628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.305996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.997 [2024-07-26 13:33:36.306358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.306715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.307074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.307410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.307427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.310184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.310553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.310921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.311283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.311752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.998 [2024-07-26 13:33:36.312115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.312484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.312846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.313210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.313611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.313632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.316114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.316478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.316837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.317203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.317645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.318019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.318384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.998 [2024-07-26 13:33:36.318741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.319097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.319464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.319482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.321985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.322359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.322724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.323083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.323503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.323867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.324229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.324588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.324955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.998 [2024-07-26 13:33:36.325408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.325426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.327932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.328301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.328659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.329018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.329430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.329804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.330174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.330537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.330892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.331324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:55.998 [2024-07-26 13:33:36.331342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:55.998 [2024-07-26 13:33:36.333791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:56.001 [message above repeated continuously from 13:33:36.333791 through 13:33:36.513480]
00:33:56.001 [2024-07-26 13:33:36.513728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.001 [2024-07-26 13:33:36.513744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.001 [2024-07-26 13:33:36.515270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.001 [2024-07-26 13:33:36.515312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.001 [2024-07-26 13:33:36.515349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.001 [2024-07-26 13:33:36.515386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.001 [2024-07-26 13:33:36.515629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.002 [2024-07-26 13:33:36.515682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.002 [2024-07-26 13:33:36.515720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.002 [2024-07-26 13:33:36.515759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.002 [2024-07-26 13:33:36.515795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.002 [2024-07-26 13:33:36.516041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.002 [2024-07-26 13:33:36.516057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.002 [2024-07-26 13:33:36.518155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.002 [2024-07-26 13:33:36.518198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.002 [2024-07-26 13:33:36.518240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.002 [2024-07-26 13:33:36.518281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.002 [2024-07-26 13:33:36.518643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.002 [2024-07-26 13:33:36.518692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.002 [2024-07-26 13:33:36.518731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.002 [2024-07-26 13:33:36.518769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.002 [2024-07-26 13:33:36.518806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.002 [2024-07-26 13:33:36.519093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.002 [2024-07-26 13:33:36.519109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.264 [2024-07-26 13:33:36.520596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.264 [2024-07-26 13:33:36.520638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.264 [2024-07-26 13:33:36.520676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.264 [2024-07-26 13:33:36.520714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.264 [2024-07-26 13:33:36.520990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.264 [2024-07-26 13:33:36.521051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.264 [2024-07-26 13:33:36.521090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.264 [2024-07-26 13:33:36.521128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.264 [2024-07-26 13:33:36.521177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.264 [2024-07-26 13:33:36.521424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.264 [2024-07-26 13:33:36.521441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.264 [2024-07-26 13:33:36.523342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.264 [2024-07-26 13:33:36.523385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.264 [2024-07-26 13:33:36.523422] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.264 [2024-07-26 13:33:36.523459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.264 [2024-07-26 13:33:36.523870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.264 [2024-07-26 13:33:36.523915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.523955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.523994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.524033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.524308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.524325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.525772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.525815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.525853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.525893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.526146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.526198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.265 [2024-07-26 13:33:36.526237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.526291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.526329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.526573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.526589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.528403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.528445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.528491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.528530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.528899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.528945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.528983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.529022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.265 [2024-07-26 13:33:36.529060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.529475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.529496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.530895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.530936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.530973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.531011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.531386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.531445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.531484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.531522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.531559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.531864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.265 [2024-07-26 13:33:36.531880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.533433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.533475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.533515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.533553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.533969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.534014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.534054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.534094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.534155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.534571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.534588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.536240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.265 [2024-07-26 13:33:36.536282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.536319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.536362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.536609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.536655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.536695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.536740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.536779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.537063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.537079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.538515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.538557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.538598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.265 [2024-07-26 13:33:36.538636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.539059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.539108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.539152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.539191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.539234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.539635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.539651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.541518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.541568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.541609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.541646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.541889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.265 [2024-07-26 13:33:36.541944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.541982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.542020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.542057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.542407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.542425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.543831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.543872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.543909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.543946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.544332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.544390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.265 [2024-07-26 13:33:36.544432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.265 [2024-07-26 13:33:36.544470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.544507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.544906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.544924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.546853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.546899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.546936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.546974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.547225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.547278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.547316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.547354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.547391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.266 [2024-07-26 13:33:36.547635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.547652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.549191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.549233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.549273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.549311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.549559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.549610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.549652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.549698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.549736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.550109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.550125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.266 [2024-07-26 13:33:36.552247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.552289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.552326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.552364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.552642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.552707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.552746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.552783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.552821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.553063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.553079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.554613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.266 [2024-07-26 13:33:36.554664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.266 [2024-07-26 13:33:36.554705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.266 [... previous message repeated ~270 times between 13:33:36.554 and 13:33:36.689 ...] 
00:33:56.269 [2024-07-26 13:33:36.689837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.269 [2024-07-26 13:33:36.692495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.269 [2024-07-26 13:33:36.692865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.269 [2024-07-26 13:33:36.693232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.269 [2024-07-26 13:33:36.693594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.269 [2024-07-26 13:33:36.694023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.269 [2024-07-26 13:33:36.694395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.269 [2024-07-26 13:33:36.694761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.269 [2024-07-26 13:33:36.695131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.269 [2024-07-26 13:33:36.695495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.269 [2024-07-26 13:33:36.695858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.269 [2024-07-26 13:33:36.695875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.269 [2024-07-26 13:33:36.698333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.269 [2024-07-26 13:33:36.698702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.269 [2024-07-26 13:33:36.699063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.269 [2024-07-26 13:33:36.699428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.269 [2024-07-26 13:33:36.699806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.269 [2024-07-26 13:33:36.700187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.269 [2024-07-26 13:33:36.700548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.269 [2024-07-26 13:33:36.700907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.269 [2024-07-26 13:33:36.701277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.269 [2024-07-26 13:33:36.701679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.269 [2024-07-26 13:33:36.701697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.269 [2024-07-26 13:33:36.704237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.269 [2024-07-26 13:33:36.705157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.269 [2024-07-26 13:33:36.706393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.269 [2024-07-26 13:33:36.707873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.269 [2024-07-26 13:33:36.708127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.269 [2024-07-26 13:33:36.709404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.269 [2024-07-26 13:33:36.710751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.269 [2024-07-26 13:33:36.712003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.713495] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.713747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.713763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.716187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.717752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.719152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.720493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.720745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.721572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.270 [2024-07-26 13:33:36.723069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.724720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.726267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.726520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.726537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.729413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.730669] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.732130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.733608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.733863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.734931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.736166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.737647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.270 [2024-07-26 13:33:36.739133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.739483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.739500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.742961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.744207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.745676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.747154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.747492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.749079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.750513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.751988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.753602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.753972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.270 [2024-07-26 13:33:36.753989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.757441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.758934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.760430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.761627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.761910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.763154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.764642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.766130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.766923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.767381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.767404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.770623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.270 [2024-07-26 13:33:36.772112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.773598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.774295] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.774549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.776019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.777588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.779247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.779614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.780034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.780051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.783469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.784341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.785809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.270 [2024-07-26 13:33:36.786473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.270 [2024-07-26 13:33:36.786727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.788051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.789530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.791015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.791384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.791812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.791830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.795233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.796837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.798457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.799441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.799732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.532 [2024-07-26 13:33:36.801211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.802686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.803931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.804308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.804731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.804749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.808097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.809590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.810351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.811806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.812060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.813544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.815020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.532 [2024-07-26 13:33:36.815389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.815750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.816133] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.816156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.819497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.821091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.821993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.823248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.823502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.824998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.826161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.826526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.532 [2024-07-26 13:33:36.826887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.533 [2024-07-26 13:33:36.827283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.827300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.830513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.831223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.832622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.834162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.834416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.835919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.836293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.836651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.837011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.837427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.837444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.533 [2024-07-26 13:33:36.840601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.840650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.841797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.841840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.842127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.843618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.845092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.845973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.846358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.846778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.846796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.850199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.850248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.533 [2024-07-26 13:33:36.851726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.851768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.852228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.853589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.853634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.855112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.855159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.855408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.855424] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.857676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.858639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.858686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.533 [2024-07-26 13:33:36.859928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.533 [2024-07-26 13:33:36.860187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:33:56.536 [identical *ERROR* message repeated through 2024-07-26 13:33:36.952126; duplicates elided]
00:33:56.536 [2024-07-26 13:33:36.952169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.952210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.952456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.952507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.952546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.952583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.952621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.952867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.952883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.954427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.954484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.954527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.954565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.536 [2024-07-26 13:33:36.954811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.954863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.954902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.955025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.955063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.955401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.955418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.957572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.957614] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.957652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.957689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.957969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.958024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.536 [2024-07-26 13:33:36.958064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.958102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.958145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.958393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.958409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.959929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.959987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.960026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.960064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.960313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.960364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.960403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.960440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.536 [2024-07-26 13:33:36.960478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.960784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.536 [2024-07-26 13:33:36.960801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.963112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.963163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.963215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.963256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.963504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.963556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.963596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.963638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.963675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.963921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.537 [2024-07-26 13:33:36.963936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.965473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.966971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.967016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.968486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.968779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.968835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.968875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.968926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.968981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.969441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.969458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.971472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.537 [2024-07-26 13:33:36.973035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.973095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.974576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.974829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.974888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.975877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.975920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.977165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.977416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.977431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.979266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.979312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.979671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.537 [2024-07-26 13:33:36.979717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.980131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.980185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.981709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.981759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.983315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.983566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.983582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.985237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.985292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.986777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.986819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.987066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.537 [2024-07-26 13:33:36.987123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.987490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.987538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.987577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.987988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.988004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.990446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.990497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.990852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.990906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.991366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.991739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.991784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.537 [2024-07-26 13:33:36.991835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.992201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.992642] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.992659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.995363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.995416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.537 [2024-07-26 13:33:36.995779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:36.995824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:36.996261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:36.996308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:36.996348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:36.996707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:36.996752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.538 [2024-07-26 13:33:36.997172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:36.997190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:36.999695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:36.999744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:36.999782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.000136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.000483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.000853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.000899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.001265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.001312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.001726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.001743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.538 [2024-07-26 13:33:37.003900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.004271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.004319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.004675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.005106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.005483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.005528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.005592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.005951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.006300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.006317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.009460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.009830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.538 [2024-07-26 13:33:37.010203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.010564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.010996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.011370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.011737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.012099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.012464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.012869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.012886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.015508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.015876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.016247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.016611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:56.538 [2024-07-26 13:33:37.016978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.017357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.017717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.018077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.018441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.018832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.018849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.021452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.021822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.022196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.022564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.022963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:33:56.538 [2024-07-26 13:33:37.023340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:33:57.105
00:33:57.105 Latency(us)
00:33:57.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:57.105 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:33:57.105 Verification LBA range: start 0x0 length 0x100
00:33:57.105 crypto_ram : 5.87 43.65 2.73 0.00 0.00 2842371.69 285212.67 2348810.24
00:33:57.105 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:33:57.105 Verification LBA range: start 0x100 length 0x100
00:33:57.105 crypto_ram : 5.91 43.29 2.71 0.00 0.00 2877109.04 268435.46 2483027.97
00:33:57.105 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:33:57.105 Verification LBA range: start 0x0 length 0x100
00:33:57.106 crypto_ram1 : 5.87 43.64 2.73 0.00 0.00 2750103.55 285212.67 2160905.42
00:33:57.106 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:33:57.106 Verification LBA range: start 0x100 length 0x100
00:33:57.106 crypto_ram1 : 5.92 43.28 2.70 0.00 0.00 2784644.30 268435.46 2308544.92
00:33:57.106 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:33:57.106 Verification LBA range: start 0x0 length 0x100
00:33:57.106 crypto_ram2 : 5.58 297.76 18.61 0.00 0.00 388023.18 78852.92 617401.55
00:33:57.106 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:33:57.106 Verification LBA range: start 0x100 length 0x100
00:33:57.106 crypto_ram2 : 5.57 280.41 17.53 0.00 0.00 409137.36 14155.78 637534.21
00:33:57.106 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:33:57.106 Verification LBA range: start 0x0 length 0x100
00:33:57.106 crypto_ram3 : 5.69 310.01 19.38 0.00 0.00 361382.09 47185.92 459695.72
00:33:57.106 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:33:57.106 Verification LBA range: start 0x100 length 0x100
00:33:57.106 crypto_ram3 : 5.68 293.07 18.32 0.00 0.00 382604.46 48863.64 335544.32
00:33:57.106 ===================================================================================================================
00:33:57.106 Total : 1355.10 84.69 0.00 0.00 708593.06 14155.78 2483027.97
00:33:57.673
00:33:57.673 real 0m8.950s
00:33:57.673 user 0m17.008s
00:33:57.673 sys 0m0.435s
00:33:57.673 13:33:37 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:33:57.673 13:33:37 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:33:57.673 ************************************
00:33:57.673 END TEST bdev_verify_big_io
00:33:57.673 ************************************
00:33:57.673 13:33:37 blockdev_crypto_qat -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:33:57.673 13:33:37 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:33:57.673 13:33:37 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable
00:33:57.673 13:33:37 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:33:57.673
************************************ 00:33:57.673 START TEST bdev_write_zeroes 00:33:57.673 ************************************ 00:33:57.673 13:33:37 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:33:57.673 [2024-07-26 13:33:38.034779] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:33:57.673 [2024-07-26 13:33:38.034836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid904853 ] 00:33:57.673 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:57.673 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:57.674 [2024-07-26 13:33:38.165576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:57.933 [2024-07-26 13:33:38.249867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:57.933 [2024-07-26 13:33:38.271122] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:33:57.933 [2024-07-26 13:33:38.279146] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:33:57.933 [2024-07-26 13:33:38.287164] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:33:57.933 [2024-07-26 13:33:38.392566] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:34:00.466 [2024-07-26 13:33:40.561618] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:34:00.466 [2024-07-26 13:33:40.561685] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:34:00.466 [2024-07-26 13:33:40.561700] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:00.466 [2024-07-26 13:33:40.569637] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:34:00.466 [2024-07-26 13:33:40.569655] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:34:00.466 [2024-07-26 13:33:40.569666] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:00.466 [2024-07-26 13:33:40.577656] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:34:00.466 [2024-07-26 13:33:40.577672] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:34:00.466 [2024-07-26 13:33:40.577683] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:00.466 [2024-07-26 13:33:40.585676] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:34:00.466 [2024-07-26 13:33:40.585698] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:34:00.466 [2024-07-26 13:33:40.585709] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:34:00.466 Running I/O for 1 seconds... 
00:34:01.407 00:34:01.407 Latency(us) 00:34:01.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:01.407 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:34:01.407 crypto_ram : 1.02 2182.12 8.52 0.00 0.00 58286.29 5138.02 70464.31 00:34:01.407 Job: crypto_ram1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:34:01.407 crypto_ram1 : 1.02 2195.28 8.58 0.00 0.00 57681.81 5111.81 65011.71 00:34:01.407 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:34:01.407 crypto_ram2 : 1.02 16850.53 65.82 0.00 0.00 7494.85 2254.44 9856.61 00:34:01.407 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:34:01.407 crypto_ram3 : 1.02 16882.62 65.95 0.00 0.00 7458.47 2254.44 7811.89 00:34:01.407 =================================================================================================================== 00:34:01.407 Total : 38110.55 148.87 0.00 0.00 13301.08 2254.44 70464.31 00:34:01.666 00:34:01.666 real 0m4.043s 00:34:01.666 user 0m3.681s 00:34:01.666 sys 0m0.318s 00:34:01.666 13:33:42 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:01.666 13:33:42 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:34:01.666 ************************************ 00:34:01.666 END TEST bdev_write_zeroes 00:34:01.666 ************************************ 00:34:01.666 13:33:42 blockdev_crypto_qat -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:01.666 13:33:42 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:34:01.666 13:33:42 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:01.666 13:33:42 
blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:34:01.666 ************************************ 00:34:01.666 START TEST bdev_json_nonenclosed 00:34:01.666 ************************************ 00:34:01.666 13:33:42 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:01.666 [2024-07-26 13:33:42.160774] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:34:01.666 [2024-07-26 13:33:42.160830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905596 ] 00:34:01.924 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:01.924 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:01.925 [2024-07-26 13:33:42.291605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:01.925 [2024-07-26 13:33:42.373703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:01.925 [2024-07-26 13:33:42.373767] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:34:01.925 [2024-07-26 13:33:42.373783] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:01.925 [2024-07-26 13:33:42.373793] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:02.184 00:34:02.184 real 0m0.355s 00:34:02.184 user 0m0.208s 00:34:02.184 sys 0m0.145s 00:34:02.184 13:33:42 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:02.184 13:33:42 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:34:02.184 ************************************ 00:34:02.184 END TEST bdev_json_nonenclosed 00:34:02.184 ************************************ 00:34:02.184 13:33:42 blockdev_crypto_qat -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:02.184 13:33:42 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:34:02.184 13:33:42 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:02.184 13:33:42 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:34:02.184 ************************************ 00:34:02.184 START TEST bdev_json_nonarray 00:34:02.184 ************************************ 00:34:02.184 13:33:42 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:02.184 [2024-07-26 13:33:42.592015] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:34:02.184 [2024-07-26 13:33:42.592068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid905673 ] 00:34:02.184 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:02.184 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:02.443 [2024-07-26 13:33:42.725425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:02.443 [2024-07-26 13:33:42.808314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:02.443 [2024-07-26 13:33:42.808385] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:34:02.443 [2024-07-26 13:33:42.808401] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:02.443 [2024-07-26 13:33:42.808412] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:02.443 00:34:02.443 real 0m0.348s 00:34:02.443 user 0m0.205s 00:34:02.443 sys 0m0.141s 00:34:02.443 13:33:42 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:02.443 13:33:42 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:34:02.443 ************************************ 00:34:02.443 END TEST bdev_json_nonarray 00:34:02.443 ************************************ 00:34:02.443 13:33:42 blockdev_crypto_qat -- bdev/blockdev.sh@786 -- # [[ crypto_qat == bdev ]] 00:34:02.443 13:33:42 blockdev_crypto_qat -- bdev/blockdev.sh@793 -- # [[ crypto_qat == gpt ]] 00:34:02.443 13:33:42 blockdev_crypto_qat -- bdev/blockdev.sh@797 -- # [[ crypto_qat == crypto_sw ]] 00:34:02.443 13:33:42 blockdev_crypto_qat -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:34:02.443 13:33:42 blockdev_crypto_qat -- bdev/blockdev.sh@810 -- # cleanup 00:34:02.443 13:33:42 blockdev_crypto_qat -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:34:02.443 13:33:42 blockdev_crypto_qat -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:34:02.443 13:33:42 blockdev_crypto_qat -- bdev/blockdev.sh@26 -- # [[ crypto_qat == rbd ]] 00:34:02.443 13:33:42 blockdev_crypto_qat -- bdev/blockdev.sh@30 -- # [[ crypto_qat == daos ]] 00:34:02.443 13:33:42 blockdev_crypto_qat -- bdev/blockdev.sh@34 -- # [[ crypto_qat = \g\p\t ]] 00:34:02.443 13:33:42 blockdev_crypto_qat -- bdev/blockdev.sh@40 -- # [[ crypto_qat == xnvme ]] 00:34:02.443 00:34:02.443 real 1m10.095s 00:34:02.443 user 2m52.476s 00:34:02.443 sys 0m8.511s 00:34:02.443 13:33:42 blockdev_crypto_qat -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:34:02.443 13:33:42 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:34:02.443 ************************************ 00:34:02.443 END TEST blockdev_crypto_qat 00:34:02.443 ************************************ 00:34:02.702 13:33:42 -- spdk/autotest.sh@364 -- # run_test chaining /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:34:02.702 13:33:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:02.702 13:33:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:02.702 13:33:42 -- common/autotest_common.sh@10 -- # set +x 00:34:02.702 ************************************ 00:34:02.702 START TEST chaining 00:34:02.702 ************************************ 00:34:02.702 13:33:43 chaining -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:34:02.702 * Looking for test storage... 00:34:02.702 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:34:02.702 13:33:43 chaining -- bdev/chaining.sh@14 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:34:02.702 13:33:43 chaining -- nvmf/common.sh@7 -- # uname -s 00:34:02.702 13:33:43 chaining -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:02.702 13:33:43 chaining -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:02.702 13:33:43 chaining -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:02.702 13:33:43 chaining -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:02.702 13:33:43 chaining -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:02.702 13:33:43 chaining -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:02.702 13:33:43 chaining -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:02.702 13:33:43 chaining -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:02.702 13:33:43 chaining -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:02.702 13:33:43 chaining -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:34:02.702 13:33:43 chaining -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:34:02.702 13:33:43 chaining -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:34:02.702 13:33:43 chaining -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:02.702 13:33:43 chaining -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:02.702 13:33:43 chaining -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:34:02.702 13:33:43 chaining -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:02.702 13:33:43 chaining -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:34:02.702 13:33:43 chaining -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:02.702 13:33:43 chaining -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:02.702 13:33:43 chaining -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:02.702 13:33:43 chaining -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.702 13:33:43 chaining -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.702 13:33:43 chaining -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.702 13:33:43 chaining -- paths/export.sh@5 -- # export PATH 00:34:02.702 13:33:43 chaining -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:02.702 13:33:43 chaining -- nvmf/common.sh@47 -- # : 0 00:34:02.702 13:33:43 chaining -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:02.702 13:33:43 chaining -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:02.702 13:33:43 chaining -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:02.702 13:33:43 chaining -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:02.702 13:33:43 chaining -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:02.702 13:33:43 chaining -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:02.702 13:33:43 chaining -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:02.702 13:33:43 chaining -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:02.702 13:33:43 chaining -- bdev/chaining.sh@16 -- # nqn=nqn.2016-06.io.spdk:cnode0 00:34:02.702 13:33:43 chaining -- bdev/chaining.sh@17 -- # key0=(00112233445566778899001122334455 11223344556677889900112233445500) 00:34:02.702 13:33:43 chaining -- bdev/chaining.sh@18 -- # key1=(22334455667788990011223344550011 
33445566778899001122334455001122) 00:34:02.702 13:33:43 chaining -- bdev/chaining.sh@19 -- # bperfsock=/var/tmp/bperf.sock 00:34:02.702 13:33:43 chaining -- bdev/chaining.sh@20 -- # declare -A stats 00:34:02.702 13:33:43 chaining -- bdev/chaining.sh@66 -- # nvmftestinit 00:34:02.702 13:33:43 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:02.703 13:33:43 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:02.703 13:33:43 chaining -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:02.703 13:33:43 chaining -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:02.703 13:33:43 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:02.703 13:33:43 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:02.703 13:33:43 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:02.703 13:33:43 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:02.703 13:33:43 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:34:02.703 13:33:43 chaining -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:02.703 13:33:43 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:34:02.703 13:33:43 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:10.812 13:33:50 
chaining -- nvmf/common.sh@296 -- # e810=() 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@297 -- # x722=() 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@298 -- # mlx=() 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:10.812 13:33:50 chaining -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.0 (0x8086 - 0x159b)' 00:34:10.812 Found 0000:20:00.0 (0x8086 - 0x159b) 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.1 (0x8086 - 0x159b)' 00:34:10.812 Found 0000:20:00.1 (0x8086 - 0x159b) 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.0: cvl_0_0' 00:34:10.812 Found net devices under 0000:20:00.0: cvl_0_0 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.1: cvl_0_1' 00:34:10.812 Found net devices under 0000:20:00.1: cvl_0_1 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@414 -- # is_hw=yes 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:10.812 13:33:50 chaining -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:10.812 13:33:50 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:10.812 13:33:51 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:10.812 13:33:51 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:10.812 13:33:51 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:10.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:10.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:34:10.812 00:34:10.812 --- 10.0.0.2 ping statistics --- 00:34:10.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:10.812 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:34:10.812 13:33:51 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:10.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:10.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:34:10.812 00:34:10.812 --- 10.0.0.1 ping statistics --- 00:34:10.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:10.812 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:34:10.812 13:33:51 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:10.812 13:33:51 chaining -- nvmf/common.sh@422 -- # return 0 00:34:10.812 13:33:51 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:10.812 13:33:51 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:10.813 13:33:51 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:10.813 13:33:51 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:10.813 13:33:51 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:10.813 13:33:51 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:10.813 13:33:51 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:10.813 13:33:51 chaining -- bdev/chaining.sh@67 -- # nvmfappstart -m 0x2 00:34:10.813 13:33:51 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:10.813 13:33:51 chaining -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:10.813 13:33:51 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:10.813 13:33:51 chaining -- nvmf/common.sh@481 -- # nvmfpid=909718 00:34:10.813 13:33:51 chaining -- nvmf/common.sh@482 -- # waitforlisten 909718 00:34:10.813 13:33:51 chaining -- common/autotest_common.sh@831 -- # '[' -z 909718 ']' 00:34:10.813 13:33:51 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:10.813 13:33:51 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:10.813 13:33:51 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:10.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:10.813 13:33:51 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:10.813 13:33:51 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:10.813 13:33:51 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:10.813 [2024-07-26 13:33:51.231636] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:34:10.813 [2024-07-26 13:33:51.231699] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:10.813 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3d:02.3 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:34:10.813 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3f:02.1 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:10.813 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:10.813 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:11.072 [2024-07-26 13:33:51.359658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.072 [2024-07-26 13:33:51.445583] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:11.072 [2024-07-26 13:33:51.445628] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:11.072 [2024-07-26 13:33:51.445641] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:11.072 [2024-07-26 13:33:51.445653] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:34:11.072 [2024-07-26 13:33:51.445663] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:11.072 [2024-07-26 13:33:51.445689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:11.640 13:33:52 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:11.640 13:33:52 chaining -- common/autotest_common.sh@864 -- # return 0 00:34:11.640 13:33:52 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:11.640 13:33:52 chaining -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:11.640 13:33:52 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:11.899 13:33:52 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@69 -- # mktemp 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@69 -- # input=/tmp/tmp.dzq7eGxWJE 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@69 -- # mktemp 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@69 -- # output=/tmp/tmp.8ZpSm8e025 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@70 -- # trap 'tgtcleanup; exit 1' SIGINT SIGTERM EXIT 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@72 -- # rpc_cmd 00:34:11.899 13:33:52 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.899 13:33:52 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:11.899 malloc0 00:34:11.899 true 00:34:11.899 true 00:34:11.899 [2024-07-26 13:33:52.223845] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:34:11.899 crypto0 00:34:11.899 [2024-07-26 13:33:52.231869] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:34:11.899 crypto1 00:34:11.899 [2024-07-26 13:33:52.239990] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:11.899 [2024-07-26 13:33:52.256191] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
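[Editor's note: the `spdk_dd` invocation later in this log feeds a JSON config assembled by piping `gen_nvme.sh --json-with-subsystems` output through a jq update-assignment that appends a `bdev_set_options` entry disabling auto-examine. A minimal Python sketch of that same transformation, using only the JSON shape visible in this log (the helper name `append_bdev_set_options` is illustrative, not part of chaining.sh):]

```python
import json

def append_bdev_set_options(config: dict) -> dict:
    """Mirror of the jq filter in chaining.sh: append a bdev_set_options
    entry (bdev_auto_examine: false) to the first subsystem's config list."""
    config["subsystems"][0]["config"].append(
        {"method": "bdev_set_options",
         "params": {"bdev_auto_examine": False}}
    )
    return config

# Shape of the gen_nvme.sh --mode=remote --json-with-subsystems output,
# exactly as it appears later in this log.
base = {
    "subsystems": [
        {
            "subsystem": "bdev",
            "config": [
                {
                    "method": "bdev_nvme_attach_controller",
                    "params": {
                        "trtype": "tcp",
                        "adrfam": "IPv4",
                        "name": "Nvme0",
                        "subnqn": "nqn.2016-06.io.spdk:cnode0",
                        "traddr": "10.0.0.2",
                        "trsvcid": "4420",
                    },
                }
            ],
        }
    ]
}

cfg = append_bdev_set_options(base)
print(json.dumps(cfg, indent=1))
```

[This is the config spdk_dd receives on `/dev/fd/62`; disabling auto-examine keeps the attached namespace from being claimed before the crypto chain is layered on top.]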
00:34:11.899 13:33:52 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@85 -- # update_stats 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@39 -- # opcode= 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:34:11.899 13:33:52 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.899 13:33:52 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:11.899 13:33:52 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=12 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:34:11.899 13:33:52 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.899 13:33:52 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:11.899 13:33:52 chaining -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]= 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:11.899 13:33:52 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:34:11.899 13:33:52 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:11.899 13:33:52 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:34:11.899 13:33:52 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:11.899 13:33:52 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:11.899 13:33:52 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:12.158 13:33:52 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:34:12.158 13:33:52 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:34:12.158 13:33:52 chaining -- bdev/chaining.sh@88 -- # dd if=/dev/urandom of=/tmp/tmp.dzq7eGxWJE bs=1K count=64 00:34:12.158 64+0 records in 00:34:12.158 64+0 records out 00:34:12.158 65536 bytes (66 kB, 64 KiB) copied, 0.00106984 s, 61.3 MB/s 00:34:12.158 13:33:52 chaining -- bdev/chaining.sh@89 -- # spdk_dd --if /tmp/tmp.dzq7eGxWJE --ob Nvme0n1 --bs 65536 --count 1 00:34:12.158 13:33:52 chaining -- bdev/chaining.sh@25 -- # local config 00:34:12.158 13:33:52 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:34:12.158 13:33:52 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:34:12.158 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:34:12.158 13:33:52 chaining -- bdev/chaining.sh@31 -- # config='{ 00:34:12.158 "subsystems": [ 00:34:12.158 { 00:34:12.158 "subsystem": "bdev", 00:34:12.158 "config": [ 00:34:12.158 { 00:34:12.158 "method": "bdev_nvme_attach_controller", 00:34:12.158 "params": { 00:34:12.158 "trtype": "tcp", 00:34:12.158 "adrfam": "IPv4", 00:34:12.158 "name": "Nvme0", 00:34:12.158 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:12.158 "traddr": "10.0.0.2", 00:34:12.158 "trsvcid": "4420" 00:34:12.158 } 00:34:12.158 }, 00:34:12.158 { 00:34:12.158 "method": "bdev_set_options", 00:34:12.158 "params": { 00:34:12.158 "bdev_auto_examine": false 00:34:12.158 } 00:34:12.158 } 00:34:12.158 ] 00:34:12.158 } 00:34:12.158 ] 00:34:12.158 }' 00:34:12.158 13:33:52 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.dzq7eGxWJE --ob Nvme0n1 --bs 65536 --count 1 00:34:12.158 13:33:52 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:34:12.158 "subsystems": [ 00:34:12.158 { 00:34:12.158 
"subsystem": "bdev", 00:34:12.158 "config": [ 00:34:12.158 { 00:34:12.158 "method": "bdev_nvme_attach_controller", 00:34:12.158 "params": { 00:34:12.158 "trtype": "tcp", 00:34:12.158 "adrfam": "IPv4", 00:34:12.158 "name": "Nvme0", 00:34:12.158 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:12.158 "traddr": "10.0.0.2", 00:34:12.158 "trsvcid": "4420" 00:34:12.158 } 00:34:12.158 }, 00:34:12.158 { 00:34:12.158 "method": "bdev_set_options", 00:34:12.158 "params": { 00:34:12.158 "bdev_auto_examine": false 00:34:12.158 } 00:34:12.158 } 00:34:12.158 ] 00:34:12.158 } 00:34:12.158 ] 00:34:12.158 }' 00:34:12.158 [2024-07-26 13:33:52.551959] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:34:12.158 [2024-07-26 13:33:52.552016] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910021 ] 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3d:01.6 cannot be used 
00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3d:02.3 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:12.158 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3f:02.1 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:12.158 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:12.158 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:12.158 [2024-07-26 13:33:52.683685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:12.416 [2024-07-26 13:33:52.766017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:12.942  Copying: 64/64 [kB] (average 15 MBps) 00:34:12.942 00:34:12.942 13:33:53 chaining -- bdev/chaining.sh@90 -- # get_stat sequence_executed 00:34:12.942 13:33:53 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:12.942 
13:33:53 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:34:12.942 13:33:53 chaining -- bdev/chaining.sh@39 -- # opcode= 00:34:12.942 13:33:53 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:12.942 13:33:53 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:34:12.942 13:33:53 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:34:12.942 13:33:53 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:34:12.942 13:33:53 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.942 13:33:53 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:12.942 13:33:53 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.942 13:33:53 chaining -- bdev/chaining.sh@90 -- # (( 13 == stats[sequence_executed] + 1 )) 00:34:12.942 13:33:53 chaining -- bdev/chaining.sh@91 -- # get_stat executed encrypt 00:34:12.942 13:33:53 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:12.942 13:33:53 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:12.942 13:33:53 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:34:12.942 13:33:53 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:12.942 13:33:53 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:34:12.942 13:33:53 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:12.942 13:33:53 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:34:12.942 13:33:53 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.942 13:33:53 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:12.942 13:33:53 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.942 13:33:53 chaining -- bdev/chaining.sh@91 -- # (( 2 == stats[encrypt_executed] + 2 )) 00:34:12.942 13:33:53 chaining -- bdev/chaining.sh@92 -- # get_stat executed decrypt 00:34:12.942 13:33:53 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:12.942 13:33:53 chaining -- 
bdev/chaining.sh@39 -- # event=executed 00:34:12.942 13:33:53 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:34:12.942 13:33:53 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:12.942 13:33:53 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:34:12.942 13:33:53 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:12.942 13:33:53 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:12.942 13:33:53 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:12.942 13:33:53 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:34:12.942 13:33:53 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:12.942 13:33:53 chaining -- bdev/chaining.sh@92 -- # (( 12 == stats[decrypt_executed] )) 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@95 -- # get_stat executed copy 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:34:13.209 13:33:53 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.209 13:33:53 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:13.209 13:33:53 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@95 -- # (( 4 == stats[copy_executed] )) 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@96 -- # update_stats 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@37 
-- # local event opcode rpc 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@39 -- # opcode= 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:34:13.209 13:33:53 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.209 13:33:53 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:13.209 13:33:53 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=13 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:34:13.209 13:33:53 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.209 13:33:53 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:13.209 13:33:53 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=2 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:13.209 13:33:53 
chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:13.209 13:33:53 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:34:13.210 13:33:53 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:13.210 13:33:53 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.210 13:33:53 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:34:13.210 13:33:53 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:13.210 13:33:53 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.210 13:33:53 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:34:13.210 13:33:53 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:34:13.210 13:33:53 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:13.210 13:33:53 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:13.210 13:33:53 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:34:13.210 13:33:53 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:13.210 13:33:53 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:34:13.210 13:33:53 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:13.210 13:33:53 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:34:13.210 13:33:53 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.210 13:33:53 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:13.210 13:33:53 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.210 13:33:53 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:34:13.210 13:33:53 chaining -- bdev/chaining.sh@99 -- # spdk_dd --of /tmp/tmp.8ZpSm8e025 --ib Nvme0n1 --bs 65536 --count 1 00:34:13.210 13:33:53 chaining -- bdev/chaining.sh@25 -- # local config 00:34:13.210 
13:33:53 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:34:13.210 13:33:53 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:34:13.210 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:34:13.468 13:33:53 chaining -- bdev/chaining.sh@31 -- # config='{ 00:34:13.468 "subsystems": [ 00:34:13.468 { 00:34:13.468 "subsystem": "bdev", 00:34:13.468 "config": [ 00:34:13.468 { 00:34:13.468 "method": "bdev_nvme_attach_controller", 00:34:13.468 "params": { 00:34:13.468 "trtype": "tcp", 00:34:13.468 "adrfam": "IPv4", 00:34:13.468 "name": "Nvme0", 00:34:13.468 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:13.468 "traddr": "10.0.0.2", 00:34:13.468 "trsvcid": "4420" 00:34:13.468 } 00:34:13.468 }, 00:34:13.468 { 00:34:13.468 "method": "bdev_set_options", 00:34:13.468 "params": { 00:34:13.468 "bdev_auto_examine": false 00:34:13.468 } 00:34:13.468 } 00:34:13.468 ] 00:34:13.468 } 00:34:13.468 ] 00:34:13.468 }' 00:34:13.468 13:33:53 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.8ZpSm8e025 --ib Nvme0n1 --bs 65536 --count 1 00:34:13.468 13:33:53 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:34:13.468 "subsystems": [ 00:34:13.468 { 00:34:13.468 "subsystem": "bdev", 00:34:13.468 "config": [ 00:34:13.468 { 00:34:13.468 "method": "bdev_nvme_attach_controller", 00:34:13.468 "params": { 00:34:13.468 "trtype": "tcp", 00:34:13.468 "adrfam": "IPv4", 00:34:13.468 "name": "Nvme0", 00:34:13.468 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:13.468 "traddr": "10.0.0.2", 00:34:13.468 "trsvcid": "4420" 00:34:13.468 } 00:34:13.468 }, 00:34:13.468 { 00:34:13.468 "method": "bdev_set_options", 00:34:13.468 "params": { 00:34:13.468 "bdev_auto_examine": false 00:34:13.468 } 00:34:13.468 } 00:34:13.468 ] 
00:34:13.468 } 00:34:13.468 ] 00:34:13.468 }' 00:34:13.468 [2024-07-26 13:33:53.807158] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:34:13.468 [2024-07-26 13:33:53.807219] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910315 ] 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3d:02.2 
cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3d:02.3 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3f:02.0 cannot be used 
00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3f:02.1 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:13.468 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:13.468 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:13.468 [2024-07-26 13:33:53.940831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:13.727 [2024-07-26 13:33:54.025008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:14.243  Copying: 64/64 [kB] (average 12 MBps) 00:34:14.243 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@100 -- # get_stat sequence_executed 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@39 -- # opcode= 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:34:14.243 13:33:54 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.243 13:33:54 
chaining -- common/autotest_common.sh@10 -- # set +x 00:34:14.243 13:33:54 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@100 -- # (( 14 == stats[sequence_executed] + 1 )) 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@101 -- # get_stat executed encrypt 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:14.243 13:33:54 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.243 13:33:54 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:14.243 13:33:54 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@101 -- # (( 2 == stats[encrypt_executed] )) 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@102 -- # get_stat executed decrypt 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:14.243 13:33:54 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.243 13:33:54 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@44 
-- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:34:14.243 13:33:54 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@102 -- # (( 14 == stats[decrypt_executed] + 2 )) 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@103 -- # get_stat executed copy 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:34:14.243 13:33:54 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:14.243 13:33:54 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:14.243 13:33:54 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@103 -- # (( 4 == stats[copy_executed] )) 00:34:14.243 13:33:54 chaining -- bdev/chaining.sh@104 -- # cmp /tmp/tmp.dzq7eGxWJE /tmp/tmp.8ZpSm8e025 00:34:14.502 13:33:54 chaining -- bdev/chaining.sh@105 -- # spdk_dd --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1 00:34:14.502 13:33:54 chaining -- bdev/chaining.sh@25 -- # local config 00:34:14.502 13:33:54 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:34:14.502 13:33:54 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:34:14.502 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:34:14.502 13:33:54 chaining -- 
bdev/chaining.sh@31 -- # config='{ 00:34:14.502 "subsystems": [ 00:34:14.502 { 00:34:14.502 "subsystem": "bdev", 00:34:14.502 "config": [ 00:34:14.502 { 00:34:14.502 "method": "bdev_nvme_attach_controller", 00:34:14.502 "params": { 00:34:14.502 "trtype": "tcp", 00:34:14.502 "adrfam": "IPv4", 00:34:14.502 "name": "Nvme0", 00:34:14.502 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:14.502 "traddr": "10.0.0.2", 00:34:14.502 "trsvcid": "4420" 00:34:14.502 } 00:34:14.502 }, 00:34:14.502 { 00:34:14.502 "method": "bdev_set_options", 00:34:14.502 "params": { 00:34:14.502 "bdev_auto_examine": false 00:34:14.502 } 00:34:14.502 } 00:34:14.502 ] 00:34:14.502 } 00:34:14.502 ] 00:34:14.502 }' 00:34:14.502 13:33:54 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1 00:34:14.502 13:33:54 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:34:14.502 "subsystems": [ 00:34:14.502 { 00:34:14.502 "subsystem": "bdev", 00:34:14.502 "config": [ 00:34:14.502 { 00:34:14.502 "method": "bdev_nvme_attach_controller", 00:34:14.502 "params": { 00:34:14.502 "trtype": "tcp", 00:34:14.502 "adrfam": "IPv4", 00:34:14.502 "name": "Nvme0", 00:34:14.502 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:14.502 "traddr": "10.0.0.2", 00:34:14.502 "trsvcid": "4420" 00:34:14.502 } 00:34:14.502 }, 00:34:14.502 { 00:34:14.502 "method": "bdev_set_options", 00:34:14.502 "params": { 00:34:14.502 "bdev_auto_examine": false 00:34:14.502 } 00:34:14.502 } 00:34:14.502 ] 00:34:14.502 } 00:34:14.502 ] 00:34:14.502 }' 00:34:14.502 [2024-07-26 13:33:54.869476] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
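The `gen_nvme.sh` output above is post-processed by the jq filter at chaining.sh@32, which appends a `bdev_set_options` entry by assigning to the index equal to the config array's current length. A minimal standalone sketch of that append idiom (the input JSON here is a shortened stand-in, not the generator's full output):

```shell
# Stand-in for the config emitted by gen_nvme.sh --json-with-subsystems
# (params trimmed; only the array shape matters for the idiom)
config='{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_nvme_attach_controller"}]}]}'

# Assigning via "|=" to index ".config | length" (one past the last element)
# appends the new method object to the config array
echo "$config" | jq '.subsystems[0].config[.subsystems[0].config | length] |=
  {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}'
```

The same filter works regardless of how many entries the generator emitted, since the index is computed from the array itself.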
00:34:14.502 [2024-07-26 13:33:54.869535] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910380 ] 00:34:14.502 [2024-07-26 13:33:55.002077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:14.761 [2024-07-26 13:33:55.085228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:15.276  Copying: 64/64 [kB] (average 31 MBps) 00:34:15.276 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@106 -- # update_stats 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@39 -- # opcode= 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:34:15.276 13:33:55 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.276 13:33:55 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:15.276 13:33:55 chaining
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=15 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:34:15.276 13:33:55 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.276 13:33:55 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:15.276 13:33:55 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=4 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:15.276 13:33:55 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.276 13:33:55 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:34:15.276 13:33:55 chaining -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:15.276 13:33:55 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:34:15.277 13:33:55 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:15.277 13:33:55 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:34:15.277 13:33:55 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:15.277 13:33:55 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:15.277 13:33:55 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:34:15.277 13:33:55 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:15.277 13:33:55 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:15.277 13:33:55 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:34:15.277 13:33:55 chaining -- bdev/chaining.sh@109 -- # spdk_dd --if /tmp/tmp.dzq7eGxWJE --ob Nvme0n1 --bs 4096 --count 16 00:34:15.277 13:33:55 chaining -- bdev/chaining.sh@25 -- # local config 00:34:15.277 13:33:55 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:34:15.277 13:33:55 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:34:15.277 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:34:15.534 13:33:55 chaining -- bdev/chaining.sh@31 -- # config='{ 00:34:15.534 "subsystems": [ 00:34:15.534 { 00:34:15.534 "subsystem": "bdev", 00:34:15.534 "config": [ 00:34:15.534 { 00:34:15.534 "method": "bdev_nvme_attach_controller", 00:34:15.534 "params": 
{ 00:34:15.534 "trtype": "tcp", 00:34:15.534 "adrfam": "IPv4", 00:34:15.534 "name": "Nvme0", 00:34:15.534 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:15.534 "traddr": "10.0.0.2", 00:34:15.534 "trsvcid": "4420" 00:34:15.534 } 00:34:15.534 }, 00:34:15.534 { 00:34:15.534 "method": "bdev_set_options", 00:34:15.534 "params": { 00:34:15.534 "bdev_auto_examine": false 00:34:15.534 } 00:34:15.534 } 00:34:15.534 ] 00:34:15.534 } 00:34:15.534 ] 00:34:15.534 }' 00:34:15.534 13:33:55 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.dzq7eGxWJE --ob Nvme0n1 --bs 4096 --count 16 00:34:15.534 13:33:55 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:34:15.534 "subsystems": [ 00:34:15.534 { 00:34:15.534 "subsystem": "bdev", 00:34:15.534 "config": [ 00:34:15.534 { 00:34:15.534 "method": "bdev_nvme_attach_controller", 00:34:15.534 "params": { 00:34:15.534 "trtype": "tcp", 00:34:15.534 "adrfam": "IPv4", 00:34:15.534 "name": "Nvme0", 00:34:15.534 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:15.534 "traddr": "10.0.0.2", 00:34:15.534 "trsvcid": "4420" 00:34:15.534 } 00:34:15.534 }, 00:34:15.534 { 00:34:15.534 "method": "bdev_set_options", 00:34:15.534 "params": { 00:34:15.534 "bdev_auto_examine": false 00:34:15.534 } 00:34:15.534 } 00:34:15.534 ] 00:34:15.534 } 00:34:15.534 ] 00:34:15.534 }' 00:34:15.534 [2024-07-26 13:33:55.902492] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
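The `update_stats`/assert cycle traced above snapshots each counter into the script's `stats` array, then checks a delta after the next transfer (e.g. `(( 14 == stats[sequence_executed] + 1 ))` for a one-block `spdk_dd` run). The pattern reduces to snapshot-then-compare; a portable sketch with plain variables standing in for the associative array:

```shell
# Snapshot taken before the I/O (value from this trace: 13 sequences so far)
baseline=13

# Counter re-read after one spdk_dd transfer; a single block should add
# exactly one accel sequence
current=14

# Fail loudly on anything other than a delta of one
if [ "$current" -eq $((baseline + 1)) ]; then
    echo "sequence delta ok"
else
    echo "unexpected sequence count: $current" >&2
    exit 1
fi
```

Comparing deltas rather than absolute counts is what lets the test tolerate whatever operations earlier test cases already executed.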
00:34:15.534 [2024-07-26 13:33:55.902553] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910634 ] 00:34:15.535 [2024-07-26 13:33:56.035734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:15.793 [2024-07-26 13:33:56.118698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.051  Copying: 64/64 [kB] (average 12 MBps) 00:34:16.051 00:34:16.051 13:33:56 chaining -- bdev/chaining.sh@110 -- # get_stat sequence_executed 00:34:16.051 13:33:56 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:16.051 13:33:56 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:34:16.051 13:33:56 chaining -- bdev/chaining.sh@39 -- # opcode= 00:34:16.051 13:33:56 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:16.051 13:33:56 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:34:16.051 13:33:56 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:34:16.051 13:33:56 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:34:16.051 13:33:56 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.051 13:33:56 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:16.051 13:33:56 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.051 13:33:56 
chaining -- bdev/chaining.sh@110 -- # (( 31 == stats[sequence_executed] + 16 )) 00:34:16.051 13:33:56 chaining -- bdev/chaining.sh@111 -- # get_stat executed encrypt 00:34:16.051 13:33:56 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:16.051 13:33:56 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:16.051 13:33:56 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:34:16.051 13:33:56 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:16.051 13:33:56 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:34:16.051 13:33:56 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:16.052 13:33:56 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:34:16.052 13:33:56 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.052 13:33:56 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:16.310 13:33:56 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@111 -- # (( 36 == stats[encrypt_executed] + 32 )) 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@112 -- # get_stat executed decrypt 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:34:16.310 13:33:56 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.310 13:33:56 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:16.310 13:33:56 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@112 -- # (( 14 == stats[decrypt_executed] )) 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@113 -- # get_stat executed copy 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:16.310 13:33:56 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.310 13:33:56 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:16.310 13:33:56 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@113 -- # (( 4 == stats[copy_executed] )) 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@114 -- # update_stats 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@39 -- # opcode= 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:34:16.310 13:33:56 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.310 13:33:56 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:16.310 13:33:56 chaining -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=31 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:16.310 13:33:56 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.310 13:33:56 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:34:16.310 13:33:56 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=36 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:16.310 13:33:56 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.310 13:33:56 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:16.310 13:33:56 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:34:16.310 13:33:56 chaining -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.569 13:33:56 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:34:16.569 13:33:56 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:34:16.569 13:33:56 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:16.569 13:33:56 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:16.569 13:33:56 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:34:16.569 13:33:56 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:16.569 13:33:56 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:34:16.569 13:33:56 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:34:16.569 13:33:56 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:16.569 13:33:56 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:16.569 13:33:56 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:16.569 13:33:56 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:16.569 13:33:56 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:34:16.569 13:33:56 chaining -- bdev/chaining.sh@117 -- # : 00:34:16.569 13:33:56 chaining -- bdev/chaining.sh@118 -- # spdk_dd --of /tmp/tmp.8ZpSm8e025 --ib Nvme0n1 --bs 4096 --count 16 00:34:16.569 13:33:56 chaining -- bdev/chaining.sh@25 -- # local config 00:34:16.569 13:33:56 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:34:16.569 13:33:56 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:34:16.569 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:34:16.569 13:33:56 chaining -- bdev/chaining.sh@31 -- # config='{ 00:34:16.569 "subsystems": [ 00:34:16.569 { 00:34:16.569 "subsystem": "bdev", 00:34:16.569 "config": [ 00:34:16.569 { 00:34:16.569 
"method": "bdev_nvme_attach_controller", 00:34:16.569 "params": { 00:34:16.569 "trtype": "tcp", 00:34:16.569 "adrfam": "IPv4", 00:34:16.569 "name": "Nvme0", 00:34:16.569 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:16.569 "traddr": "10.0.0.2", 00:34:16.569 "trsvcid": "4420" 00:34:16.569 } 00:34:16.569 }, 00:34:16.569 { 00:34:16.569 "method": "bdev_set_options", 00:34:16.569 "params": { 00:34:16.569 "bdev_auto_examine": false 00:34:16.569 } 00:34:16.569 } 00:34:16.569 ] 00:34:16.569 } 00:34:16.569 ] 00:34:16.569 }' 00:34:16.569 13:33:56 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:34:16.569 "subsystems": [ 00:34:16.569 { 00:34:16.569 "subsystem": "bdev", 00:34:16.569 "config": [ 00:34:16.569 { 00:34:16.569 "method": "bdev_nvme_attach_controller", 00:34:16.569 "params": { 00:34:16.569 "trtype": "tcp", 00:34:16.569 "adrfam": "IPv4", 00:34:16.569 "name": "Nvme0", 00:34:16.569 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:16.569 "traddr": "10.0.0.2", 00:34:16.569 "trsvcid": "4420" 00:34:16.569 } 00:34:16.569 }, 00:34:16.569 { 00:34:16.569 "method": "bdev_set_options", 00:34:16.569 "params": { 00:34:16.569 "bdev_auto_examine": false 00:34:16.569 } 00:34:16.569 } 00:34:16.569 ] 00:34:16.569 } 00:34:16.569 ] 00:34:16.569 }' 00:34:16.569 13:33:56 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.8ZpSm8e025 --ib Nvme0n1 --bs 4096 --count 16 00:34:16.569 [2024-07-26 13:33:56.989905] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:34:16.569 [2024-07-26 13:33:56.989966] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid910912 ]
00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:16.569 EAL: Requested device 0000:3d:01.0 cannot be used
00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:16.569 EAL: Requested device 0000:3d:01.1 cannot be used
00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:16.569 EAL: Requested device 0000:3d:01.2 cannot be used
00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:16.569 EAL: Requested device 0000:3d:01.3 cannot be used
00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:16.569 EAL: Requested device 0000:3d:01.4 cannot be used
00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:16.569 EAL: Requested device 0000:3d:01.5 cannot be used
00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:16.569 EAL: Requested device 0000:3d:01.6 cannot be used
00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:16.569 EAL: Requested device 0000:3d:01.7 cannot be used
00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:16.569 EAL: Requested device 0000:3d:02.0 cannot be used
00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:16.569 EAL: Requested device 0000:3d:02.1 cannot be used
00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:16.569 EAL: Requested device 0000:3d:02.2 cannot be used
00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:16.569 EAL: Requested device 0000:3d:02.3 cannot be used
00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:16.569 EAL: Requested device 0000:3d:02.4 cannot be used
00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:16.569 EAL: Requested device 0000:3d:02.5 cannot be used
00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:16.569 EAL: Requested device 0000:3d:02.6 cannot be used
00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:16.569 EAL: Requested device 0000:3d:02.7 cannot be used
00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:16.569 EAL: Requested device 0000:3f:01.0 cannot be used
00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:16.569 EAL: Requested device 0000:3f:01.1 cannot be used
00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:16.569 EAL: Requested device 0000:3f:01.2 cannot be used
00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:16.569 EAL: Requested device 0000:3f:01.3 cannot be used
00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:16.569 EAL: Requested device 0000:3f:01.4 cannot be used
00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:16.569 EAL: Requested device 0000:3f:01.5 cannot be used
00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:16.569 EAL: Requested device 0000:3f:01.6 cannot be used
00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:16.569 EAL: Requested device 0000:3f:01.7 cannot be used
00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:16.569 EAL: Requested device 0000:3f:02.0 cannot be used
00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:16.569 EAL: Requested device 0000:3f:02.1 cannot be used
00:34:16.569
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.569 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.569 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.569 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.569 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.569 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:16.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.569 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:16.827 [2024-07-26 13:33:57.119918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.827 [2024-07-26 13:33:57.202324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:17.343  Copying: 64/64 [kB] (average 711 kBps) 00:34:17.343 00:34:17.343 13:33:57 chaining -- bdev/chaining.sh@119 -- # get_stat sequence_executed 00:34:17.343 13:33:57 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:17.343 13:33:57 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:34:17.343 13:33:57 chaining -- bdev/chaining.sh@39 -- # opcode= 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:34:17.344 13:33:57 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.344 13:33:57 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:17.344 13:33:57 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.344 13:33:57 
chaining -- bdev/chaining.sh@119 -- # (( 47 == stats[sequence_executed] + 16 )) 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@120 -- # get_stat executed encrypt 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:34:17.344 13:33:57 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.344 13:33:57 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:17.344 13:33:57 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@120 -- # (( 36 == stats[encrypt_executed] )) 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@121 -- # get_stat executed decrypt 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:17.344 13:33:57 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.344 13:33:57 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:17.344 13:33:57 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@121 -- # (( 46 == stats[decrypt_executed] + 32 )) 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@122 -- # get_stat executed copy 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:34:17.344 13:33:57 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:17.344 13:33:57 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:17.344 13:33:57 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@122 -- # (( 4 == stats[copy_executed] )) 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@123 -- # cmp /tmp/tmp.dzq7eGxWJE /tmp/tmp.8ZpSm8e025 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@126 -- # tgtcleanup 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@58 -- # rm -f /tmp/tmp.dzq7eGxWJE /tmp/tmp.8ZpSm8e025 00:34:17.344 13:33:57 chaining -- bdev/chaining.sh@59 -- # nvmftestfini 00:34:17.344 13:33:57 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:17.344 13:33:57 chaining -- nvmf/common.sh@117 -- # sync 00:34:17.344 13:33:57 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:17.344 13:33:57 chaining -- nvmf/common.sh@120 -- # set +e 00:34:17.344 13:33:57 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:17.344 13:33:57 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:17.603 rmmod nvme_tcp 
00:34:17.603 rmmod nvme_fabrics 00:34:17.603 rmmod nvme_keyring 00:34:17.603 13:33:57 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:17.603 13:33:57 chaining -- nvmf/common.sh@124 -- # set -e 00:34:17.603 13:33:57 chaining -- nvmf/common.sh@125 -- # return 0 00:34:17.603 13:33:57 chaining -- nvmf/common.sh@489 -- # '[' -n 909718 ']' 00:34:17.603 13:33:57 chaining -- nvmf/common.sh@490 -- # killprocess 909718 00:34:17.603 13:33:57 chaining -- common/autotest_common.sh@950 -- # '[' -z 909718 ']' 00:34:17.603 13:33:57 chaining -- common/autotest_common.sh@954 -- # kill -0 909718 00:34:17.603 13:33:57 chaining -- common/autotest_common.sh@955 -- # uname 00:34:17.603 13:33:57 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:17.603 13:33:57 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 909718 00:34:17.603 13:33:57 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:17.603 13:33:57 chaining -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:17.603 13:33:57 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 909718' 00:34:17.603 killing process with pid 909718 00:34:17.603 13:33:57 chaining -- common/autotest_common.sh@969 -- # kill 909718 00:34:17.603 13:33:57 chaining -- common/autotest_common.sh@974 -- # wait 909718 00:34:17.861 13:33:58 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:17.861 13:33:58 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:17.861 13:33:58 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:17.861 13:33:58 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:17.861 13:33:58 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:17.861 13:33:58 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:17.861 13:33:58 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:17.861 
13:33:58 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:19.765 13:34:00 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:19.765 13:34:00 chaining -- bdev/chaining.sh@129 -- # trap 'bperfcleanup; exit 1' SIGINT SIGTERM EXIT 00:34:19.765 13:34:00 chaining -- bdev/chaining.sh@132 -- # bperfpid=911517 00:34:19.765 13:34:00 chaining -- bdev/chaining.sh@134 -- # waitforlisten 911517 00:34:19.765 13:34:00 chaining -- bdev/chaining.sh@131 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:34:19.765 13:34:00 chaining -- common/autotest_common.sh@831 -- # '[' -z 911517 ']' 00:34:19.765 13:34:00 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:19.765 13:34:00 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:19.765 13:34:00 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:19.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:19.765 13:34:00 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:19.765 13:34:00 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:20.024 [2024-07-26 13:34:00.337092] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:34:20.024 [2024-07-26 13:34:00.337162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid911517 ]
00:34:20.024 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:20.024 EAL: Requested device 0000:3d:01.0 cannot be used
00:34:20.024 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:20.024 EAL: Requested device 0000:3d:01.1 cannot be used
00:34:20.024 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:20.024 EAL: Requested device 0000:3d:01.2 cannot be used
00:34:20.024 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:20.024 EAL: Requested device 0000:3d:01.3 cannot be used
00:34:20.024 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:20.024 EAL: Requested device 0000:3d:01.4 cannot be used
00:34:20.024 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:20.024 EAL: Requested device 0000:3d:01.5 cannot be used
00:34:20.024 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:20.024 EAL: Requested device 0000:3d:01.6 cannot be used
00:34:20.024 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:20.024 EAL: Requested device 0000:3d:01.7 cannot be used
00:34:20.024 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:20.024 EAL: Requested device 0000:3d:02.0 cannot be used
00:34:20.024 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:20.024 EAL: Requested device 0000:3d:02.1 cannot be used
00:34:20.024 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:20.024 EAL: Requested device 0000:3d:02.2 cannot be used
00:34:20.024 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:20.025 EAL: Requested device 0000:3d:02.3 cannot be used
00:34:20.025 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:20.025 EAL: Requested device 0000:3d:02.4 cannot be used
00:34:20.025 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:20.025 EAL: Requested device 0000:3d:02.5 cannot be used
00:34:20.025 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:20.025 EAL: Requested device 0000:3d:02.6 cannot be used
00:34:20.025 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:20.025 EAL: Requested device 0000:3d:02.7 cannot be used
00:34:20.025 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:20.025 EAL: Requested device 0000:3f:01.0 cannot be used
00:34:20.025 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:20.025 EAL: Requested device 0000:3f:01.1 cannot be used
00:34:20.025 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:20.025 EAL: Requested device 0000:3f:01.2 cannot be used
00:34:20.025 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:20.025 EAL: Requested device 0000:3f:01.3 cannot be used
00:34:20.025 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:20.025 EAL: Requested device 0000:3f:01.4 cannot be used
00:34:20.025 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:20.025 EAL: Requested device 0000:3f:01.5 cannot be used
00:34:20.025 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:20.025 EAL: Requested device 0000:3f:01.6 cannot be used
00:34:20.025 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:20.025 EAL: Requested device 0000:3f:01.7 cannot be used
00:34:20.025 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:20.025 EAL: Requested device 0000:3f:02.0 cannot be used
00:34:20.025 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:34:20.025 EAL: Requested device 0000:3f:02.1 cannot be used
00:34:20.025
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.025 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:20.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.025 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:20.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.025 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:20.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.025 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:20.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.025 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:20.025 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:20.025 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:20.025 [2024-07-26 13:34:00.455617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:20.025 [2024-07-26 13:34:00.538292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:20.960 13:34:01 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:20.960 13:34:01 chaining -- common/autotest_common.sh@864 -- # return 0 00:34:20.960 13:34:01 chaining -- bdev/chaining.sh@135 -- # rpc_cmd 00:34:20.960 13:34:01 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:20.960 13:34:01 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:20.960 malloc0 00:34:20.960 true 00:34:20.960 true 00:34:20.960 [2024-07-26 13:34:01.374389] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:34:20.960 crypto0 00:34:20.960 [2024-07-26 13:34:01.382413] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:34:20.960 crypto1 00:34:20.960 13:34:01 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:20.960 13:34:01 chaining -- bdev/chaining.sh@145 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py 
perform_tests 00:34:21.218 Running I/O for 5 seconds... 00:34:26.486 00:34:26.486 Latency(us) 00:34:26.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:26.486 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:34:26.486 Verification LBA range: start 0x0 length 0x2000 00:34:26.486 crypto1 : 5.01 12433.47 48.57 0.00 0.00 20528.73 2202.01 13159.63 00:34:26.486 =================================================================================================================== 00:34:26.486 Total : 12433.47 48.57 0.00 0.00 20528.73 2202.01 13159.63 00:34:26.486 0 00:34:26.486 13:34:06 chaining -- bdev/chaining.sh@146 -- # killprocess 911517 00:34:26.486 13:34:06 chaining -- common/autotest_common.sh@950 -- # '[' -z 911517 ']' 00:34:26.486 13:34:06 chaining -- common/autotest_common.sh@954 -- # kill -0 911517 00:34:26.486 13:34:06 chaining -- common/autotest_common.sh@955 -- # uname 00:34:26.486 13:34:06 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:26.486 13:34:06 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 911517 00:34:26.486 13:34:06 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:26.486 13:34:06 chaining -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:26.486 13:34:06 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 911517' 00:34:26.486 killing process with pid 911517 00:34:26.486 13:34:06 chaining -- common/autotest_common.sh@969 -- # kill 911517 00:34:26.486 Received shutdown signal, test time was about 5.000000 seconds 00:34:26.486 00:34:26.486 Latency(us) 00:34:26.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:26.486 =================================================================================================================== 00:34:26.486 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:26.486 13:34:06 chaining -- 
common/autotest_common.sh@974 -- # wait 911517 00:34:26.486 13:34:06 chaining -- bdev/chaining.sh@152 -- # bperfpid=912584 00:34:26.486 13:34:06 chaining -- bdev/chaining.sh@154 -- # waitforlisten 912584 00:34:26.486 13:34:06 chaining -- common/autotest_common.sh@831 -- # '[' -z 912584 ']' 00:34:26.486 13:34:06 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:26.486 13:34:06 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:26.486 13:34:06 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:26.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:26.486 13:34:06 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:26.486 13:34:06 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:26.486 13:34:06 chaining -- bdev/chaining.sh@151 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:34:26.486 [2024-07-26 13:34:07.003617] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:34:26.486 [2024-07-26 13:34:07.003750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid912584 ] 00:34:26.745 qat_pci_device_allocate(): Reached maximum number of QAT devices (message repeated for each device) 00:34:26.745 EAL: Requested devices 0000:3d:01.0-0000:3d:02.7 and 0000:3f:01.0-0000:3f:02.7 cannot be used 00:34:26.746 [2024-07-26 13:34:07.209247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:27.004 [2024-07-26 13:34:07.292838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:27.940 13:34:08 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:27.940 13:34:08 chaining -- common/autotest_common.sh@864 -- # return 0 00:34:27.940 13:34:08 chaining -- bdev/chaining.sh@155 -- # rpc_cmd 00:34:27.940 13:34:08 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:27.940 13:34:08 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:27.940 malloc0 00:34:27.940 true 00:34:27.940 true 00:34:27.940 [2024-07-26 13:34:08.269296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:34:27.940 [2024-07-26 13:34:08.269343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:27.940 [2024-07-26 13:34:08.269361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27113b0 00:34:27.940 [2024-07-26 13:34:08.269372] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:27.940 [2024-07-26 
13:34:08.270365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:27.940 [2024-07-26 13:34:08.270390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:34:27.940 pt0 00:34:27.940 [2024-07-26 13:34:08.277326] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:34:27.940 crypto0 00:34:27.940 [2024-07-26 13:34:08.285345] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:34:27.940 crypto1 00:34:27.940 13:34:08 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:27.940 13:34:08 chaining -- bdev/chaining.sh@166 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:34:28.198 Running I/O for 5 seconds... 00:34:33.503 00:34:33.503 Latency(us) 00:34:33.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:33.503 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:34:33.503 Verification LBA range: start 0x0 length 0x2000 00:34:33.503 crypto1 : 5.01 9703.01 37.90 0.00 0.00 26315.55 5976.88 16043.21 00:34:33.503 =================================================================================================================== 00:34:33.503 Total : 9703.01 37.90 0.00 0.00 26315.55 5976.88 16043.21 00:34:33.503 0 00:34:33.503 13:34:13 chaining -- bdev/chaining.sh@167 -- # killprocess 912584 00:34:33.503 13:34:13 chaining -- common/autotest_common.sh@950 -- # '[' -z 912584 ']' 00:34:33.503 13:34:13 chaining -- common/autotest_common.sh@954 -- # kill -0 912584 00:34:33.503 13:34:13 chaining -- common/autotest_common.sh@955 -- # uname 00:34:33.503 13:34:13 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:33.503 13:34:13 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 912584 00:34:33.503 13:34:13 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:33.503 13:34:13 chaining -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:33.503 13:34:13 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 912584' 00:34:33.503 killing process with pid 912584 00:34:33.503 13:34:13 chaining -- common/autotest_common.sh@969 -- # kill 912584 00:34:33.503 Received shutdown signal, test time was about 5.000000 seconds 00:34:33.503 00:34:33.503 Latency(us) 00:34:33.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:33.503 =================================================================================================================== 00:34:33.503 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:33.503 13:34:13 chaining -- common/autotest_common.sh@974 -- # wait 912584 00:34:33.503 13:34:13 chaining -- bdev/chaining.sh@169 -- # trap - SIGINT SIGTERM EXIT 00:34:33.503 13:34:13 chaining -- bdev/chaining.sh@170 -- # killprocess 912584 00:34:33.503 13:34:13 chaining -- common/autotest_common.sh@950 -- # '[' -z 912584 ']' 00:34:33.503 13:34:13 chaining -- common/autotest_common.sh@954 -- # kill -0 912584 00:34:33.503 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (912584) - No such process 00:34:33.503 13:34:13 chaining -- common/autotest_common.sh@977 -- # echo 'Process with pid 912584 is not found' 00:34:33.503 Process with pid 912584 is not found 00:34:33.503 13:34:13 chaining -- bdev/chaining.sh@171 -- # wait 912584 00:34:33.503 13:34:13 chaining -- bdev/chaining.sh@175 -- # nvmftestinit 00:34:33.503 13:34:13 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:33.503 13:34:13 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:33.503 13:34:13 chaining -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:33.503 13:34:13 chaining -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:33.503 13:34:13 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:33.503 13:34:13 chaining -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:34:33.503 13:34:13 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:33.503 13:34:13 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:33.503 13:34:13 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:34:33.503 13:34:13 chaining -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:33.503 13:34:13 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:34:33.503 13:34:13 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:33.503 13:34:13 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:33.503 13:34:13 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:34:33.503 13:34:13 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:33.503 13:34:13 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:33.503 13:34:13 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:33.503 13:34:13 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:33.503 13:34:13 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:33.503 13:34:13 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:34:33.503 13:34:13 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:33.503 13:34:13 chaining -- nvmf/common.sh@296 -- # e810=() 00:34:33.503 13:34:13 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:34:33.503 13:34:13 chaining -- nvmf/common.sh@297 -- # x722=() 00:34:33.503 13:34:13 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:34:33.503 13:34:13 chaining -- nvmf/common.sh@298 -- # mlx=() 00:34:33.503 13:34:13 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:34:33.503 13:34:13 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:33.503 13:34:13 chaining -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:33.503 13:34:13 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:33.503 13:34:13 chaining 
-- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:33.503 13:34:13 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.0 (0x8086 - 0x159b)' 00:34:33.504 Found 0000:20:00.0 (0x8086 - 0x159b) 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:33.504 13:34:13 
chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.1 (0x8086 - 0x159b)' 00:34:33.504 Found 0000:20:00.1 (0x8086 - 0x159b) 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.0: cvl_0_0' 00:34:33.504 Found net devices under 0000:20:00.0: cvl_0_0 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:33.504 
13:34:13 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.1: cvl_0_1' 00:34:33.504 Found net devices under 0000:20:00.1: cvl_0_1 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@414 -- # is_hw=yes 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@254 -- 
# ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:33.504 13:34:13 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:33.763 13:34:14 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:33.763 13:34:14 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:33.763 13:34:14 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:33.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:33.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:34:33.763 00:34:33.763 --- 10.0.0.2 ping statistics --- 00:34:33.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:33.763 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:34:33.763 13:34:14 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:33.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:33.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:34:33.763 00:34:33.763 --- 10.0.0.1 ping statistics --- 00:34:33.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:33.763 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:34:33.763 13:34:14 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:33.763 13:34:14 chaining -- nvmf/common.sh@422 -- # return 0 00:34:33.763 13:34:14 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:33.763 13:34:14 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:33.763 13:34:14 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:33.763 13:34:14 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:33.763 13:34:14 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:33.763 13:34:14 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:33.763 13:34:14 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:33.763 13:34:14 chaining -- bdev/chaining.sh@176 -- # nvmfappstart -m 0x2 00:34:33.763 13:34:14 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:33.763 13:34:14 chaining -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:33.763 13:34:14 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:33.763 13:34:14 chaining -- nvmf/common.sh@481 -- # nvmfpid=913679 00:34:33.763 13:34:14 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:33.763 13:34:14 chaining -- nvmf/common.sh@482 -- # waitforlisten 913679 00:34:33.763 13:34:14 chaining -- common/autotest_common.sh@831 -- # '[' -z 913679 ']' 00:34:33.763 13:34:14 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:33.763 13:34:14 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:33.763 13:34:14 
chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:33.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 13:34:14 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:33.763 13:34:14 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:33.763 [2024-07-26 13:34:14.222074] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:34:33.763 [2024-07-26 13:34:14.222135] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:34.022 qat_pci_device_allocate(): Reached maximum number of QAT devices (message repeated for each device) 00:34:34.022 EAL: Requested devices 0000:3d:01.0-0000:3d:02.7 and 0000:3f:01.0-0000:3f:02.7 cannot be used 00:34:34.023 [2024-07-26 13:34:14.349023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:34.023 [2024-07-26 13:34:14.432961] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:34.023 [2024-07-26 13:34:14.433005] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:34.023 [2024-07-26 13:34:14.433018] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:34.023 [2024-07-26 13:34:14.433030] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:34.023 [2024-07-26 13:34:14.433040] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:34.023 [2024-07-26 13:34:14.433069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:34.589 13:34:15 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:34.589 13:34:15 chaining -- common/autotest_common.sh@864 -- # return 0 00:34:34.589 13:34:15 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:34.589 13:34:15 chaining -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:34.589 13:34:15 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:34.848 13:34:15 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:34.848 13:34:15 chaining -- bdev/chaining.sh@178 -- # rpc_cmd 00:34:34.848 13:34:15 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:34.848 13:34:15 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:34.848 malloc0 00:34:34.848 [2024-07-26 13:34:15.162459] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:34.848 [2024-07-26 13:34:15.178662] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:34.848 13:34:15 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:34.848 13:34:15 chaining -- bdev/chaining.sh@186 -- # trap 'bperfcleanup || :; nvmftestfini || :; exit 1' SIGINT SIGTERM EXIT 00:34:34.848 13:34:15 chaining -- bdev/chaining.sh@189 -- # bperfpid=913955 00:34:34.848 13:34:15 chaining -- bdev/chaining.sh@191 -- # waitforlisten 913955 /var/tmp/bperf.sock 00:34:34.848 13:34:15 chaining -- common/autotest_common.sh@831 -- # '[' -z 913955 ']' 00:34:34.848 13:34:15 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:34.848 13:34:15 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:34.848 13:34:15 chaining -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:34.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:34.848 13:34:15 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:34.848 13:34:15 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:34.848 13:34:15 chaining -- bdev/chaining.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:34:34.848 [2024-07-26 13:34:15.246478] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 00:34:34.848 [2024-07-26 13:34:15.246535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid913955 ] 00:34:34.848 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:34.848 EAL: Requested device 0000:3d:01.0 cannot be used
[identical "qat_pci_device_allocate(): Reached maximum number of QAT devices" / "EAL: Requested device ... cannot be used" message pairs repeated for each QAT VF from 0000:3d:01.1 through 0000:3f:02.7]
00:34:35.107 [2024-07-26 13:34:15.376951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:35.107 [2024-07-26 13:34:15.462867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:36.042 13:34:16 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:36.042 13:34:16 chaining -- common/autotest_common.sh@864 -- # return 0 00:34:36.042 13:34:16 chaining -- bdev/chaining.sh@192 -- # rpc_bperf 00:34:36.042 13:34:16
chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:34:36.300 [2024-07-26 13:34:16.811081] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:34:36.300 nvme0n1 00:34:36.300 true 00:34:36.300 crypto0 00:34:36.559 13:34:16 chaining -- bdev/chaining.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:36.559 Running I/O for 5 seconds... 00:34:41.823 00:34:41.823 Latency(us) 00:34:41.823 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:41.823 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:34:41.823 Verification LBA range: start 0x0 length 0x2000 00:34:41.823 crypto0 : 5.02 9583.63 37.44 0.00 0.00 26627.74 2962.23 21915.24 00:34:41.823 =================================================================================================================== 00:34:41.823 Total : 9583.63 37.44 0.00 0.00 26627.74 2962.23 21915.24 00:34:41.823 0 00:34:41.823 13:34:22 chaining -- bdev/chaining.sh@205 -- # get_stat_bperf sequence_executed 00:34:41.823 13:34:22 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:34:41.823 13:34:22 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:41.823 13:34:22 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:34:41.823 13:34:22 chaining -- bdev/chaining.sh@39 -- # opcode= 00:34:41.823 13:34:22 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:34:41.823 13:34:22 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:34:41.823 13:34:22 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats 00:34:41.823 13:34:22 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:34:41.823 13:34:22 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:42.081 
13:34:22 chaining -- bdev/chaining.sh@205 -- # sequence=96206 00:34:42.081 13:34:22 chaining -- bdev/chaining.sh@206 -- # get_stat_bperf executed encrypt 00:34:42.081 13:34:22 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:34:42.081 13:34:22 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:42.081 13:34:22 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:42.081 13:34:22 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:34:42.081 13:34:22 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:34:42.082 13:34:22 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:34:42.082 13:34:22 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:34:42.082 13:34:22 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:34:42.082 13:34:22 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:42.082 13:34:22 chaining -- bdev/chaining.sh@206 -- # encrypt=48103 00:34:42.082 13:34:22 chaining -- bdev/chaining.sh@207 -- # get_stat_bperf executed decrypt 00:34:42.082 13:34:22 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:34:42.082 13:34:22 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:42.082 13:34:22 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:42.082 13:34:22 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:34:42.082 13:34:22 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:34:42.082 13:34:22 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:34:42.082 13:34:22 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:34:42.082 13:34:22 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:34:42.082 13:34:22 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
accel_get_stats 00:34:42.340 13:34:22 chaining -- bdev/chaining.sh@207 -- # decrypt=48103 00:34:42.340 13:34:22 chaining -- bdev/chaining.sh@208 -- # get_stat_bperf executed crc32c 00:34:42.340 13:34:22 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:34:42.340 13:34:22 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:42.340 13:34:22 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:42.340 13:34:22 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:34:42.340 13:34:22 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:34:42.340 13:34:22 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:34:42.340 13:34:22 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:34:42.340 13:34:22 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:34:42.340 13:34:22 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:42.597 13:34:23 chaining -- bdev/chaining.sh@208 -- # crc32c=96206 00:34:42.597 13:34:23 chaining -- bdev/chaining.sh@210 -- # (( sequence > 0 )) 00:34:42.597 13:34:23 chaining -- bdev/chaining.sh@211 -- # (( encrypt + decrypt == sequence )) 00:34:42.597 13:34:23 chaining -- bdev/chaining.sh@212 -- # (( encrypt + decrypt == crc32c )) 00:34:42.597 13:34:23 chaining -- bdev/chaining.sh@214 -- # killprocess 913955 00:34:42.597 13:34:23 chaining -- common/autotest_common.sh@950 -- # '[' -z 913955 ']' 00:34:42.597 13:34:23 chaining -- common/autotest_common.sh@954 -- # kill -0 913955 00:34:42.597 13:34:23 chaining -- common/autotest_common.sh@955 -- # uname 00:34:42.597 13:34:23 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:42.597 13:34:23 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 913955 00:34:42.597 13:34:23 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:42.597 13:34:23 chaining 
-- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:42.597 13:34:23 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 913955' 00:34:42.597 killing process with pid 913955 00:34:42.597 13:34:23 chaining -- common/autotest_common.sh@969 -- # kill 913955 00:34:42.597 Received shutdown signal, test time was about 5.000000 seconds 00:34:42.597 00:34:42.597 Latency(us) 00:34:42.597 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:42.597 =================================================================================================================== 00:34:42.597 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:42.597 13:34:23 chaining -- common/autotest_common.sh@974 -- # wait 913955 00:34:42.855 13:34:23 chaining -- bdev/chaining.sh@219 -- # bperfpid=915288 00:34:42.855 13:34:23 chaining -- bdev/chaining.sh@217 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 65536 -q 32 --wait-for-rpc -z 00:34:42.855 13:34:23 chaining -- bdev/chaining.sh@221 -- # waitforlisten 915288 /var/tmp/bperf.sock 00:34:42.855 13:34:23 chaining -- common/autotest_common.sh@831 -- # '[' -z 915288 ']' 00:34:42.855 13:34:23 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:34:42.855 13:34:23 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:42.855 13:34:23 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:34:42.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:34:42.855 13:34:23 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:42.855 13:34:23 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:42.855 [2024-07-26 13:34:23.372302] Starting SPDK v24.09-pre git sha1 79c77cd86 / DPDK 24.03.0 initialization... 
00:34:42.855 [2024-07-26 13:34:23.372364] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid915288 ] 00:34:43.114 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:43.114 EAL: Requested device 0000:3d:01.0 cannot be used
[identical "qat_pci_device_allocate(): Reached maximum number of QAT devices" / "EAL: Requested device ... cannot be used" message pairs repeated for each QAT VF from 0000:3d:01.1 through 0000:3f:02.7]
00:34:43.115 [2024-07-26 13:34:23.502950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:43.115 [2024-07-26 13:34:23.589399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:44.048 13:34:24 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:44.048 13:34:24 chaining -- common/autotest_common.sh@864 -- # return 0 00:34:44.048 13:34:24 chaining -- bdev/chaining.sh@222 -- # rpc_bperf 00:34:44.048 13:34:24 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:34:44.306 [2024-07-26 13:34:24.658266] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:34:44.306 nvme0n1 00:34:44.306 true 00:34:44.306 crypto0 00:34:44.306 13:34:24 chaining -- bdev/chaining.sh@231 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:34:44.306 Running I/O for 5 seconds...
00:34:49.569 00:34:49.569 Latency(us) 00:34:49.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:49.569 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:34:49.569 Verification LBA range: start 0x0 length 0x200 00:34:49.569 crypto0 : 5.01 1885.40 117.84 0.00 0.00 16625.96 1199.31 20342.37 00:34:49.569 =================================================================================================================== 00:34:49.569 Total : 1885.40 117.84 0.00 0.00 16625.96 1199.31 20342.37 00:34:49.569 0 00:34:49.569 13:34:29 chaining -- bdev/chaining.sh@233 -- # get_stat_bperf sequence_executed 00:34:49.569 13:34:29 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:34:49.569 13:34:29 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:49.569 13:34:29 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:34:49.569 13:34:29 chaining -- bdev/chaining.sh@39 -- # opcode= 00:34:49.569 13:34:29 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:34:49.569 13:34:29 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:34:49.569 13:34:29 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats 00:34:49.569 13:34:29 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:34:49.569 13:34:29 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:49.569 13:34:30 chaining -- bdev/chaining.sh@233 -- # sequence=18886 00:34:49.569 13:34:30 chaining -- bdev/chaining.sh@234 -- # get_stat_bperf executed encrypt 00:34:49.569 13:34:30 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:34:49.569 13:34:30 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:49.569 13:34:30 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:49.569 13:34:30 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:34:49.569 13:34:30 chaining -- 
bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:34:49.569 13:34:30 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:34:49.569 13:34:30 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:34:49.569 13:34:30 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:34:49.569 13:34:30 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:49.828 13:34:30 chaining -- bdev/chaining.sh@234 -- # encrypt=9443 00:34:49.828 13:34:30 chaining -- bdev/chaining.sh@235 -- # get_stat_bperf executed decrypt 00:34:49.828 13:34:30 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:34:49.828 13:34:30 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:49.828 13:34:30 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:49.828 13:34:30 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:34:49.828 13:34:30 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:34:49.828 13:34:30 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:34:49.828 13:34:30 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:34:49.828 13:34:30 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:34:49.828 13:34:30 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:50.086 13:34:30 chaining -- bdev/chaining.sh@235 -- # decrypt=9443 00:34:50.086 13:34:30 chaining -- bdev/chaining.sh@236 -- # get_stat_bperf executed crc32c 00:34:50.086 13:34:30 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:34:50.086 13:34:30 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:34:50.086 13:34:30 chaining -- bdev/chaining.sh@39 -- # event=executed 00:34:50.086 13:34:30 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:34:50.086 13:34:30 
chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:34:50.086 13:34:30 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:34:50.086 13:34:30 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:34:50.086 13:34:30 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:34:50.086 13:34:30 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:34:50.345 13:34:30 chaining -- bdev/chaining.sh@236 -- # crc32c=18886 00:34:50.345 13:34:30 chaining -- bdev/chaining.sh@238 -- # (( sequence > 0 )) 00:34:50.345 13:34:30 chaining -- bdev/chaining.sh@239 -- # (( encrypt + decrypt == sequence )) 00:34:50.345 13:34:30 chaining -- bdev/chaining.sh@240 -- # (( encrypt + decrypt == crc32c )) 00:34:50.345 13:34:30 chaining -- bdev/chaining.sh@242 -- # killprocess 915288 00:34:50.345 13:34:30 chaining -- common/autotest_common.sh@950 -- # '[' -z 915288 ']' 00:34:50.345 13:34:30 chaining -- common/autotest_common.sh@954 -- # kill -0 915288 00:34:50.345 13:34:30 chaining -- common/autotest_common.sh@955 -- # uname 00:34:50.345 13:34:30 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:50.345 13:34:30 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 915288 00:34:50.345 13:34:30 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:50.345 13:34:30 chaining -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:50.345 13:34:30 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 915288' 00:34:50.345 killing process with pid 915288 00:34:50.345 13:34:30 chaining -- common/autotest_common.sh@969 -- # kill 915288 00:34:50.345 Received shutdown signal, test time was about 5.000000 seconds 00:34:50.345 00:34:50.345 Latency(us) 00:34:50.345 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:50.345 
=================================================================================================================== 00:34:50.345 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:50.345 13:34:30 chaining -- common/autotest_common.sh@974 -- # wait 915288 00:34:50.604 13:34:31 chaining -- bdev/chaining.sh@243 -- # nvmftestfini 00:34:50.604 13:34:31 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:50.604 13:34:31 chaining -- nvmf/common.sh@117 -- # sync 00:34:50.604 13:34:31 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:50.604 13:34:31 chaining -- nvmf/common.sh@120 -- # set +e 00:34:50.604 13:34:31 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:50.604 13:34:31 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:50.604 rmmod nvme_tcp 00:34:50.604 rmmod nvme_fabrics 00:34:50.604 rmmod nvme_keyring 00:34:50.604 13:34:31 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:50.604 13:34:31 chaining -- nvmf/common.sh@124 -- # set -e 00:34:50.604 13:34:31 chaining -- nvmf/common.sh@125 -- # return 0 00:34:50.604 13:34:31 chaining -- nvmf/common.sh@489 -- # '[' -n 913679 ']' 00:34:50.604 13:34:31 chaining -- nvmf/common.sh@490 -- # killprocess 913679 00:34:50.604 13:34:31 chaining -- common/autotest_common.sh@950 -- # '[' -z 913679 ']' 00:34:50.604 13:34:31 chaining -- common/autotest_common.sh@954 -- # kill -0 913679 00:34:50.604 13:34:31 chaining -- common/autotest_common.sh@955 -- # uname 00:34:50.604 13:34:31 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:50.604 13:34:31 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 913679 00:34:50.863 13:34:31 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:34:50.863 13:34:31 chaining -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:34:50.863 13:34:31 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 913679' 00:34:50.863 killing process with pid 913679 
00:34:50.863 13:34:31 chaining -- common/autotest_common.sh@969 -- # kill 913679 00:34:50.863 13:34:31 chaining -- common/autotest_common.sh@974 -- # wait 913679 00:34:50.863 13:34:31 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:50.863 13:34:31 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:50.863 13:34:31 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:50.863 13:34:31 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:50.863 13:34:31 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:50.863 13:34:31 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:50.863 13:34:31 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:50.863 13:34:31 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.423 13:34:33 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:53.423 13:34:33 chaining -- bdev/chaining.sh@245 -- # trap - SIGINT SIGTERM EXIT 00:34:53.423 00:34:53.423 real 0m50.422s 00:34:53.423 user 1m1.776s 00:34:53.423 sys 0m12.946s 00:34:53.423 13:34:33 chaining -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:53.423 13:34:33 chaining -- common/autotest_common.sh@10 -- # set +x 00:34:53.423 ************************************ 00:34:53.423 END TEST chaining 00:34:53.423 ************************************ 00:34:53.423 13:34:33 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:34:53.423 13:34:33 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:34:53.423 13:34:33 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:34:53.423 13:34:33 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:34:53.423 13:34:33 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:34:53.423 13:34:33 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:34:53.423 13:34:33 -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:53.423 13:34:33 -- common/autotest_common.sh@10 -- # set +x 00:34:53.423 13:34:33 -- 
spdk/autotest.sh@387 -- # autotest_cleanup 00:34:53.423 13:34:33 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:34:53.423 13:34:33 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:34:53.423 13:34:33 -- common/autotest_common.sh@10 -- # set +x 00:34:59.986 INFO: APP EXITING 00:34:59.986 INFO: killing all VMs 00:34:59.986 INFO: killing vhost app 00:34:59.986 INFO: EXIT DONE 00:35:03.271 Waiting for block devices as requested 00:35:03.271 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:03.271 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:03.271 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:03.271 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:03.271 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:03.530 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:03.530 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:03.530 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:03.788 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:03.788 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:03.788 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:04.047 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:04.047 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:04.047 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:04.305 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:04.305 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:04.305 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:35:09.572 Cleaning 00:35:09.572 Removing: /var/run/dpdk/spdk0/config 00:35:09.572 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:09.572 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:09.572 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:09.572 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:09.572 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:09.572 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:09.572 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 
00:35:09.572 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:09.572 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:09.572 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:09.572 Removing: /dev/shm/nvmf_trace.0 00:35:09.572 Removing: /dev/shm/spdk_tgt_trace.pid601470 00:35:09.572 Removing: /var/run/dpdk/spdk0 00:35:09.572 Removing: /var/run/dpdk/spdk_pid596025 00:35:09.572 Removing: /var/run/dpdk/spdk_pid600185 00:35:09.572 Removing: /var/run/dpdk/spdk_pid601470 00:35:09.572 Removing: /var/run/dpdk/spdk_pid602131 00:35:09.572 Removing: /var/run/dpdk/spdk_pid603206 00:35:09.572 Removing: /var/run/dpdk/spdk_pid603474 00:35:09.572 Removing: /var/run/dpdk/spdk_pid604388 00:35:09.572 Removing: /var/run/dpdk/spdk_pid604591 00:35:09.572 Removing: /var/run/dpdk/spdk_pid604968 00:35:09.572 Removing: /var/run/dpdk/spdk_pid608406 00:35:09.572 Removing: /var/run/dpdk/spdk_pid610527 00:35:09.572 Removing: /var/run/dpdk/spdk_pid610838 00:35:09.572 Removing: /var/run/dpdk/spdk_pid611160 00:35:09.572 Removing: /var/run/dpdk/spdk_pid611499 00:35:09.572 Removing: /var/run/dpdk/spdk_pid611824 00:35:09.572 Removing: /var/run/dpdk/spdk_pid612114 00:35:09.572 Removing: /var/run/dpdk/spdk_pid612392 00:35:09.572 Removing: /var/run/dpdk/spdk_pid612704 00:35:09.572 Removing: /var/run/dpdk/spdk_pid613544 00:35:09.572 Removing: /var/run/dpdk/spdk_pid616947 00:35:09.572 Removing: /var/run/dpdk/spdk_pid617236 00:35:09.572 Removing: /var/run/dpdk/spdk_pid617556 00:35:09.572 Removing: /var/run/dpdk/spdk_pid617864 00:35:09.572 Removing: /var/run/dpdk/spdk_pid617896 00:35:09.572 Removing: /var/run/dpdk/spdk_pid618199 00:35:09.572 Removing: /var/run/dpdk/spdk_pid618489 00:35:09.572 Removing: /var/run/dpdk/spdk_pid618766 00:35:09.572 Removing: /var/run/dpdk/spdk_pid619054 00:35:09.572 Removing: /var/run/dpdk/spdk_pid619331 00:35:09.572 Removing: /var/run/dpdk/spdk_pid619620 00:35:09.572 Removing: /var/run/dpdk/spdk_pid619897 00:35:09.572 Removing: /var/run/dpdk/spdk_pid620187 
00:35:09.572 Removing: /var/run/dpdk/spdk_pid620465
00:35:09.572 Removing: /var/run/dpdk/spdk_pid620750
00:35:09.572 Removing: /var/run/dpdk/spdk_pid621030
00:35:09.572 Removing: /var/run/dpdk/spdk_pid621315
00:35:09.572 Removing: /var/run/dpdk/spdk_pid621595
00:35:09.572 Removing: /var/run/dpdk/spdk_pid621880
00:35:09.572 Removing: /var/run/dpdk/spdk_pid622163
00:35:09.572 Removing: /var/run/dpdk/spdk_pid622449
00:35:09.572 Removing: /var/run/dpdk/spdk_pid622727
00:35:09.572 Removing: /var/run/dpdk/spdk_pid623020
00:35:09.572 Removing: /var/run/dpdk/spdk_pid623302
00:35:09.572 Removing: /var/run/dpdk/spdk_pid623588
00:35:09.572 Removing: /var/run/dpdk/spdk_pid623864
00:35:09.572 Removing: /var/run/dpdk/spdk_pid624151
00:35:09.572 Removing: /var/run/dpdk/spdk_pid624443
00:35:09.572 Removing: /var/run/dpdk/spdk_pid624947
00:35:09.572 Removing: /var/run/dpdk/spdk_pid625271
00:35:09.572 Removing: /var/run/dpdk/spdk_pid625618
00:35:09.572 Removing: /var/run/dpdk/spdk_pid626103
00:35:09.572 Removing: /var/run/dpdk/spdk_pid626392
00:35:09.572 Removing: /var/run/dpdk/spdk_pid626907
00:35:09.572 Removing: /var/run/dpdk/spdk_pid626997
00:35:09.572 Removing: /var/run/dpdk/spdk_pid627342
00:35:09.572 Removing: /var/run/dpdk/spdk_pid627980
00:35:09.572 Removing: /var/run/dpdk/spdk_pid628282
00:35:09.572 Removing: /var/run/dpdk/spdk_pid628558
00:35:09.572 Removing: /var/run/dpdk/spdk_pid633958
00:35:09.572 Removing: /var/run/dpdk/spdk_pid636218
00:35:09.572 Removing: /var/run/dpdk/spdk_pid638231
00:35:09.572 Removing: /var/run/dpdk/spdk_pid639311
00:35:09.572 Removing: /var/run/dpdk/spdk_pid640650
00:35:09.572 Removing: /var/run/dpdk/spdk_pid641189
00:35:09.572 Removing: /var/run/dpdk/spdk_pid641210
00:35:09.572 Removing: /var/run/dpdk/spdk_pid641238
00:35:09.572 Removing: /var/run/dpdk/spdk_pid646287
00:35:09.572 Removing: /var/run/dpdk/spdk_pid646896
00:35:09.573 Removing: /var/run/dpdk/spdk_pid648142
00:35:09.573 Removing: /var/run/dpdk/spdk_pid648256
00:35:09.573 Removing: /var/run/dpdk/spdk_pid657349
00:35:09.573 Removing: /var/run/dpdk/spdk_pid659167
00:35:09.573 Removing: /var/run/dpdk/spdk_pid660323
00:35:09.573 Removing: /var/run/dpdk/spdk_pid665899
00:35:09.573 Removing: /var/run/dpdk/spdk_pid667719
00:35:09.573 Removing: /var/run/dpdk/spdk_pid668879
00:35:09.573 Removing: /var/run/dpdk/spdk_pid673918
00:35:09.573 Removing: /var/run/dpdk/spdk_pid676665
00:35:09.573 Removing: /var/run/dpdk/spdk_pid677782
00:35:09.573 Removing: /var/run/dpdk/spdk_pid689389
00:35:09.573 Removing: /var/run/dpdk/spdk_pid691801
00:35:09.573 Removing: /var/run/dpdk/spdk_pid693109
00:35:09.573 Removing: /var/run/dpdk/spdk_pid705111
00:35:09.573 Removing: /var/run/dpdk/spdk_pid707767
00:35:09.573 Removing: /var/run/dpdk/spdk_pid708928
00:35:09.573 Removing: /var/run/dpdk/spdk_pid720526
00:35:09.573 Removing: /var/run/dpdk/spdk_pid724721
00:35:09.573 Removing: /var/run/dpdk/spdk_pid726087
00:35:09.573 Removing: /var/run/dpdk/spdk_pid739443
00:35:09.573 Removing: /var/run/dpdk/spdk_pid742413
00:35:09.573 Removing: /var/run/dpdk/spdk_pid743833
00:35:09.573 Removing: /var/run/dpdk/spdk_pid756630
00:35:09.573 Removing: /var/run/dpdk/spdk_pid759597
00:35:09.573 Removing: /var/run/dpdk/spdk_pid760780
00:35:09.573 Removing: /var/run/dpdk/spdk_pid774115
00:35:09.573 Removing: /var/run/dpdk/spdk_pid778597
00:35:09.573 Removing: /var/run/dpdk/spdk_pid779759
00:35:09.573 Removing: /var/run/dpdk/spdk_pid780998
00:35:09.573 Removing: /var/run/dpdk/spdk_pid784584
00:35:09.573 Removing: /var/run/dpdk/spdk_pid790594
00:35:09.573 Removing: /var/run/dpdk/spdk_pid793737
00:35:09.573 Removing: /var/run/dpdk/spdk_pid799473
00:35:09.573 Removing: /var/run/dpdk/spdk_pid803834
00:35:09.573 Removing: /var/run/dpdk/spdk_pid810242
00:35:09.573 Removing: /var/run/dpdk/spdk_pid813848
00:35:09.573 Removing: /var/run/dpdk/spdk_pid821492
00:35:09.573 Removing: /var/run/dpdk/spdk_pid824188
00:35:09.573 Removing: /var/run/dpdk/spdk_pid831563
00:35:09.573 Removing: /var/run/dpdk/spdk_pid834469
00:35:09.573 Removing: /var/run/dpdk/spdk_pid842279
00:35:09.573 Removing: /var/run/dpdk/spdk_pid844975
00:35:09.573 Removing: /var/run/dpdk/spdk_pid850173
00:35:09.573 Removing: /var/run/dpdk/spdk_pid850473
00:35:09.573 Removing: /var/run/dpdk/spdk_pid850984
00:35:09.573 Removing: /var/run/dpdk/spdk_pid851517
00:35:09.573 Removing: /var/run/dpdk/spdk_pid852120
00:35:09.573 Removing: /var/run/dpdk/spdk_pid852987
00:35:09.573 Removing: /var/run/dpdk/spdk_pid853925
00:35:09.573 Removing: /var/run/dpdk/spdk_pid854289
00:35:09.573 Removing: /var/run/dpdk/spdk_pid856446
00:35:09.573 Removing: /var/run/dpdk/spdk_pid858725
00:35:09.573 Removing: /var/run/dpdk/spdk_pid860938
00:35:09.573 Removing: /var/run/dpdk/spdk_pid862609
00:35:09.573 Removing: /var/run/dpdk/spdk_pid864916
00:35:09.573 Removing: /var/run/dpdk/spdk_pid867464
00:35:09.573 Removing: /var/run/dpdk/spdk_pid869804
00:35:09.573 Removing: /var/run/dpdk/spdk_pid871694
00:35:09.573 Removing: /var/run/dpdk/spdk_pid872286
00:35:09.573 Removing: /var/run/dpdk/spdk_pid872831
00:35:09.573 Removing: /var/run/dpdk/spdk_pid875429
00:35:09.573 Removing: /var/run/dpdk/spdk_pid877705
00:35:09.573 Removing: /var/run/dpdk/spdk_pid880106
00:35:09.573 Removing: /var/run/dpdk/spdk_pid881470
00:35:09.573 Removing: /var/run/dpdk/spdk_pid882892
00:35:09.573 Removing: /var/run/dpdk/spdk_pid883632
00:35:09.573 Removing: /var/run/dpdk/spdk_pid883719
00:35:09.573 Removing: /var/run/dpdk/spdk_pid883783
00:35:09.573 Removing: /var/run/dpdk/spdk_pid884139
00:35:09.573 Removing: /var/run/dpdk/spdk_pid884349
00:35:09.573 Removing: /var/run/dpdk/spdk_pid885784
00:35:09.573 Removing: /var/run/dpdk/spdk_pid887741
00:35:09.573 Removing: /var/run/dpdk/spdk_pid889585
00:35:09.573 Removing: /var/run/dpdk/spdk_pid890641
00:35:09.573 Removing: /var/run/dpdk/spdk_pid891654
00:35:09.573 Removing: /var/run/dpdk/spdk_pid891991
00:35:09.573 Removing: /var/run/dpdk/spdk_pid892015
00:35:09.573 Removing: /var/run/dpdk/spdk_pid892044
00:35:09.573 Removing: /var/run/dpdk/spdk_pid893174
00:35:09.573 Removing: /var/run/dpdk/spdk_pid893966
00:35:09.573 Removing: /var/run/dpdk/spdk_pid894508
00:35:09.573 Removing: /var/run/dpdk/spdk_pid896873
00:35:09.573 Removing: /var/run/dpdk/spdk_pid899700
00:35:09.573 Removing: /var/run/dpdk/spdk_pid902109
00:35:09.573 Removing: /var/run/dpdk/spdk_pid903445
00:35:09.573 Removing: /var/run/dpdk/spdk_pid904853
00:35:09.573 Removing: /var/run/dpdk/spdk_pid905596
00:35:09.573 Removing: /var/run/dpdk/spdk_pid905673
00:35:09.573 Removing: /var/run/dpdk/spdk_pid910021
00:35:09.573 Removing: /var/run/dpdk/spdk_pid910315
00:35:09.573 Removing: /var/run/dpdk/spdk_pid910380
00:35:09.573 Removing: /var/run/dpdk/spdk_pid910634
00:35:09.573 Removing: /var/run/dpdk/spdk_pid910912
00:35:09.573 Removing: /var/run/dpdk/spdk_pid911517
00:35:09.573 Removing: /var/run/dpdk/spdk_pid912584
00:35:09.573 Removing: /var/run/dpdk/spdk_pid913955
00:35:09.832 Removing: /var/run/dpdk/spdk_pid915288
00:35:09.832 Clean
00:35:09.832 13:34:50 -- common/autotest_common.sh@1451 -- # return 0
00:35:09.832 13:34:50 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup
00:35:09.832 13:34:50 -- common/autotest_common.sh@730 -- # xtrace_disable
00:35:09.832 13:34:50 -- common/autotest_common.sh@10 -- # set +x
00:35:09.832 13:34:50 -- spdk/autotest.sh@390 -- # timing_exit autotest
00:35:09.832 13:34:50 -- common/autotest_common.sh@730 -- # xtrace_disable
00:35:09.832 13:34:50 -- common/autotest_common.sh@10 -- # set +x
00:35:09.832 13:34:50 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt
00:35:09.832 13:34:50 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log ]]
00:35:09.832 13:34:50 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log
00:35:09.832 13:34:50 -- spdk/autotest.sh@395 -- # hash lcov
00:35:09.832 13:34:50 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:35:09.832 13:34:50 -- spdk/autotest.sh@397 -- # hostname
00:35:09.832 13:34:50 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/crypto-phy-autotest/spdk -t spdk-wfp-19 -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info
00:35:10.090 geninfo: WARNING: invalid characters removed from testname!
00:35:36.650 13:35:16 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:35:39.181 13:35:19 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:35:41.707 13:35:21 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:35:44.234 13:35:24 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:35:46.763 13:35:26 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:35:48.663 13:35:29 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info
00:35:51.191 13:35:31 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:35:51.191 13:35:31 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh
00:35:51.191 13:35:31 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:35:51.191 13:35:31 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:35:51.191 13:35:31 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:35:51.191 13:35:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:51.191 13:35:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:51.191 13:35:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:51.191 13:35:31 -- paths/export.sh@5 -- $ export PATH
00:35:51.191 13:35:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:51.191 13:35:31 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:35:51.191 13:35:31 -- common/autobuild_common.sh@447 -- $ date +%s
00:35:51.191 13:35:31 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721993731.XXXXXX
00:35:51.191 13:35:31 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721993731.DRugQM
00:35:51.191 13:35:31 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:35:51.191 13:35:31 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:35:51.191 13:35:31 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/'
00:35:51.191 13:35:31 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp'
00:35:51.191 13:35:31 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:35:51.191 13:35:31 -- common/autobuild_common.sh@463 -- $ get_config_params
00:35:51.191 13:35:31 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:35:51.191 13:35:31 -- common/autotest_common.sh@10 -- $ set +x
00:35:51.191 13:35:31 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk'
00:35:51.191 13:35:31 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:35:51.191 13:35:31 -- pm/common@17 -- $ local monitor
00:35:51.191 13:35:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:51.191 13:35:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:51.191 13:35:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:51.191 13:35:31 -- pm/common@21 -- $ date +%s
00:35:51.192 13:35:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:51.192 13:35:31 -- pm/common@21 -- $ date +%s
00:35:51.192 13:35:31 -- pm/common@21 -- $ date +%s
00:35:51.192 13:35:31 -- pm/common@25 -- $ sleep 1
00:35:51.192 13:35:31 -- pm/common@21 -- $ date +%s
00:35:51.192 13:35:31 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721993731
00:35:51.192 13:35:31 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721993731
00:35:51.192 13:35:31 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721993731
00:35:51.192 13:35:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721993731
00:35:51.450 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721993731_collect-vmstat.pm.log
00:35:51.450 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721993731_collect-cpu-temp.pm.log
00:35:51.450 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721993731_collect-cpu-load.pm.log
00:35:51.450 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721993731_collect-bmc-pm.bmc.pm.log
00:35:52.387 13:35:32 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:35:52.387 13:35:32 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112
00:35:52.387 13:35:32 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk
00:35:52.387 13:35:32 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:35:52.387 13:35:32 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:35:52.387 13:35:32 -- spdk/autopackage.sh@19 -- $ timing_finish
00:35:52.387 13:35:32 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:35:52.387 13:35:32 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:35:52.387 13:35:32 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt
00:35:52.387 13:35:32 -- spdk/autopackage.sh@20 -- $ exit 0
00:35:52.387 13:35:32 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:35:52.387 13:35:32 -- pm/common@29 -- $ signal_monitor_resources TERM
00:35:52.387 13:35:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:35:52.387 13:35:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:52.387 13:35:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:35:52.387 13:35:32 -- pm/common@44 -- $ pid=928122
00:35:52.387 13:35:32 -- pm/common@50 -- $ kill -TERM 928122
00:35:52.387 13:35:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:52.387 13:35:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:35:52.387 13:35:32 -- pm/common@44 -- $ pid=928124
00:35:52.387 13:35:32 -- pm/common@50 -- $ kill -TERM 928124
00:35:52.387 13:35:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:52.387 13:35:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:35:52.387 13:35:32 -- pm/common@44 -- $ pid=928126
00:35:52.387 13:35:32 -- pm/common@50 -- $ kill -TERM 928126
00:35:52.387 13:35:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:35:52.387 13:35:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:35:52.387 13:35:32 -- pm/common@44 -- $ pid=928150
00:35:52.387 13:35:32 -- pm/common@50 -- $ sudo -E kill -TERM 928150
00:35:52.387 + [[ -n 467083 ]]
00:35:52.387 + sudo kill 467083
00:35:52.397 [Pipeline] }
00:35:52.417 [Pipeline] // stage
00:35:52.423 [Pipeline] }
00:35:52.440 [Pipeline] // timeout
00:35:52.445 [Pipeline] }
00:35:52.463 [Pipeline] // catchError
00:35:52.468 [Pipeline] }
00:35:52.487 [Pipeline] // wrap
00:35:52.493 [Pipeline] }
00:35:52.509 [Pipeline] // catchError
00:35:52.519 [Pipeline] stage
00:35:52.522 [Pipeline] { (Epilogue)
00:35:52.537 [Pipeline] catchError
00:35:52.539 [Pipeline] {
00:35:52.554 [Pipeline] echo
00:35:52.556 Cleanup processes
00:35:52.562 [Pipeline] sh
00:35:52.844 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:35:52.844 928229 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/sdr.cache
00:35:52.844 928572 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:35:52.858 [Pipeline] sh
00:35:53.197 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:35:53.197 ++ grep -v 'sudo pgrep'
00:35:53.197 ++ awk '{print $1}'
00:35:53.197 + sudo kill -9 928229
00:35:53.208 [Pipeline] sh
00:35:53.488 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:35:53.488 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:36:01.601 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB
00:36:06.882 [Pipeline] sh
00:36:07.165 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:07.165 Artifacts sizes are good
00:36:07.179 [Pipeline] archiveArtifacts
00:36:07.187 Archiving artifacts
00:36:07.294 [Pipeline] sh
00:36:07.572 + sudo chown -R sys_sgci /var/jenkins/workspace/crypto-phy-autotest
00:36:07.586 [Pipeline] cleanWs
00:36:07.596 [WS-CLEANUP] Deleting project workspace...
00:36:07.596 [WS-CLEANUP] Deferred wipeout is used...
00:36:07.602 [WS-CLEANUP] done
00:36:07.604 [Pipeline] }
00:36:07.625 [Pipeline] // catchError
00:36:07.637 [Pipeline] sh
00:36:07.919 + logger -p user.info -t JENKINS-CI
00:36:07.927 [Pipeline] }
00:36:07.942 [Pipeline] // stage
00:36:07.946 [Pipeline] }
00:36:07.963 [Pipeline] // node
00:36:07.970 [Pipeline] End of Pipeline
00:36:07.999 Finished: SUCCESS